00:00:00.001 Started by upstream project "autotest-nightly" build number 4278 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3641 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.014 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.018 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.022 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.038 Fetching changes from the remote Git repository 00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.064 Using shallow fetch with depth 1 00:00:00.064 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.064 > git --version # timeout=10 00:00:00.083 > git --version # 'git version 2.39.2' 00:00:00.083 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.102 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.102 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.990 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.002 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.015 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:03.015 > git config core.sparsecheckout # timeout=10 00:00:03.027 > git read-tree -mu HEAD # timeout=10 00:00:03.043 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.062 Commit message: 
"jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.063 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:03.169 [Pipeline] Start of Pipeline 00:00:03.185 [Pipeline] library 00:00:03.187 Loading library shm_lib@master 00:00:03.187 Library shm_lib@master is cached. Copying from home. 00:00:03.203 [Pipeline] node 00:00:03.214 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.215 [Pipeline] { 00:00:03.225 [Pipeline] catchError 00:00:03.227 [Pipeline] { 00:00:03.239 [Pipeline] wrap 00:00:03.248 [Pipeline] { 00:00:03.254 [Pipeline] stage 00:00:03.256 [Pipeline] { (Prologue) 00:00:03.269 [Pipeline] echo 00:00:03.271 Node: VM-host-WFP7 00:00:03.275 [Pipeline] cleanWs 00:00:03.284 [WS-CLEANUP] Deleting project workspace... 00:00:03.284 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.291 [WS-CLEANUP] done 00:00:03.467 [Pipeline] setCustomBuildProperty 00:00:03.573 [Pipeline] httpRequest 00:00:04.167 [Pipeline] echo 00:00:04.169 Sorcerer 10.211.164.20 is alive 00:00:04.178 [Pipeline] retry 00:00:04.180 [Pipeline] { 00:00:04.191 [Pipeline] httpRequest 00:00:04.195 HttpMethod: GET 00:00:04.195 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.195 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.196 Response Code: HTTP/1.1 200 OK 00:00:04.196 Success: Status code 200 is in the accepted range: 200,404 00:00:04.197 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.343 [Pipeline] } 00:00:04.359 [Pipeline] // retry 00:00:04.367 [Pipeline] sh 00:00:04.648 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.661 [Pipeline] httpRequest 00:00:05.001 [Pipeline] echo 00:00:05.002 Sorcerer 10.211.164.20 is alive 00:00:05.011 [Pipeline] retry 00:00:05.014 [Pipeline] { 00:00:05.025 [Pipeline] 
httpRequest 00:00:05.028 HttpMethod: GET 00:00:05.029 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:05.029 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:05.036 Response Code: HTTP/1.1 200 OK 00:00:05.037 Success: Status code 200 is in the accepted range: 200,404 00:00:05.037 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:53.716 [Pipeline] } 00:00:53.734 [Pipeline] // retry 00:00:53.743 [Pipeline] sh 00:00:54.031 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:56.622 [Pipeline] sh 00:00:56.908 + git -C spdk log --oneline -n5 00:00:56.908 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:56.908 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:56.908 4bcab9fb9 correct kick for CQ full case 00:00:56.908 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:56.908 318515b44 nvme/perf: interrupt mode support for pcie controller 00:00:56.928 [Pipeline] writeFile 00:00:56.942 [Pipeline] sh 00:00:57.228 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:57.241 [Pipeline] sh 00:00:57.527 + cat autorun-spdk.conf 00:00:57.527 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.527 SPDK_RUN_ASAN=1 00:00:57.527 SPDK_RUN_UBSAN=1 00:00:57.527 SPDK_TEST_RAID=1 00:00:57.527 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.535 RUN_NIGHTLY=1 00:00:57.537 [Pipeline] } 00:00:57.550 [Pipeline] // stage 00:00:57.565 [Pipeline] stage 00:00:57.568 [Pipeline] { (Run VM) 00:00:57.592 [Pipeline] sh 00:00:57.878 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:57.878 + echo 'Start stage prepare_nvme.sh' 00:00:57.878 Start stage prepare_nvme.sh 00:00:57.878 + [[ -n 2 ]] 00:00:57.878 + disk_prefix=ex2 00:00:57.878 + [[ -n 
/var/jenkins/workspace/raid-vg-autotest ]] 00:00:57.878 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:57.878 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:57.878 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.878 ++ SPDK_RUN_ASAN=1 00:00:57.878 ++ SPDK_RUN_UBSAN=1 00:00:57.878 ++ SPDK_TEST_RAID=1 00:00:57.878 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.878 ++ RUN_NIGHTLY=1 00:00:57.878 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:57.878 + nvme_files=() 00:00:57.878 + declare -A nvme_files 00:00:57.878 + backend_dir=/var/lib/libvirt/images/backends 00:00:57.878 + nvme_files['nvme.img']=5G 00:00:57.878 + nvme_files['nvme-cmb.img']=5G 00:00:57.878 + nvme_files['nvme-multi0.img']=4G 00:00:57.878 + nvme_files['nvme-multi1.img']=4G 00:00:57.878 + nvme_files['nvme-multi2.img']=4G 00:00:57.878 + nvme_files['nvme-openstack.img']=8G 00:00:57.878 + nvme_files['nvme-zns.img']=5G 00:00:57.878 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:57.878 + (( SPDK_TEST_FTL == 1 )) 00:00:57.878 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:57.878 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:57.878 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:57.878 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:57.878 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:57.878 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:57.878 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:57.878 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.878 + for nvme in "${!nvme_files[@]}" 00:00:57.878 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:58.139 
Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:58.139 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:58.139 + echo 'End stage prepare_nvme.sh' 00:00:58.139 End stage prepare_nvme.sh 00:00:58.151 [Pipeline] sh 00:00:58.436 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:58.437 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39 00:00:58.437 00:00:58.437 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:58.437 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:58.437 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:58.437 HELP=0 00:00:58.437 DRY_RUN=0 00:00:58.437 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:58.437 NVME_DISKS_TYPE=nvme,nvme, 00:00:58.437 NVME_AUTO_CREATE=0 00:00:58.437 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:58.437 NVME_CMB=,, 00:00:58.437 NVME_PMR=,, 00:00:58.437 NVME_ZNS=,, 00:00:58.437 NVME_MS=,, 00:00:58.437 NVME_FDP=,, 00:00:58.437 SPDK_VAGRANT_DISTRO=fedora39 00:00:58.437 SPDK_VAGRANT_VMCPU=10 00:00:58.437 SPDK_VAGRANT_VMRAM=12288 00:00:58.437 SPDK_VAGRANT_PROVIDER=libvirt 00:00:58.437 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:58.437 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:58.437 SPDK_OPENSTACK_NETWORK=0 00:00:58.437 VAGRANT_PACKAGE_BOX=0 00:00:58.437 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:58.437 
FORCE_DISTRO=true 00:00:58.437 VAGRANT_BOX_VERSION= 00:00:58.437 EXTRA_VAGRANTFILES= 00:00:58.437 NIC_MODEL=virtio 00:00:58.437 00:00:58.437 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:58.437 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:00.347 Bringing machine 'default' up with 'libvirt' provider... 00:01:00.917 ==> default: Creating image (snapshot of base box volume). 00:01:00.917 ==> default: Creating domain with the following settings... 00:01:00.917 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731901797_e5675a939cf4ee85c256 00:01:00.917 ==> default: -- Domain type: kvm 00:01:00.917 ==> default: -- Cpus: 10 00:01:00.917 ==> default: -- Feature: acpi 00:01:00.917 ==> default: -- Feature: apic 00:01:00.917 ==> default: -- Feature: pae 00:01:00.917 ==> default: -- Memory: 12288M 00:01:00.917 ==> default: -- Memory Backing: hugepages: 00:01:00.917 ==> default: -- Management MAC: 00:01:00.917 ==> default: -- Loader: 00:01:00.917 ==> default: -- Nvram: 00:01:00.917 ==> default: -- Base box: spdk/fedora39 00:01:00.917 ==> default: -- Storage pool: default 00:01:00.917 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731901797_e5675a939cf4ee85c256.img (20G) 00:01:00.917 ==> default: -- Volume Cache: default 00:01:00.917 ==> default: -- Kernel: 00:01:00.917 ==> default: -- Initrd: 00:01:00.917 ==> default: -- Graphics Type: vnc 00:01:00.917 ==> default: -- Graphics Port: -1 00:01:00.917 ==> default: -- Graphics IP: 127.0.0.1 00:01:00.917 ==> default: -- Graphics Password: Not defined 00:01:00.917 ==> default: -- Video Type: cirrus 00:01:00.917 ==> default: -- Video VRAM: 9216 00:01:00.917 ==> default: -- Sound Type: 00:01:00.917 ==> default: -- Keymap: en-us 00:01:00.917 ==> default: -- TPM Path: 00:01:00.917 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:00.917 ==> default: -- Command line args: 00:01:00.917 
==> default: -> value=-device, 00:01:00.917 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:00.917 ==> default: -> value=-drive, 00:01:00.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:00.917 ==> default: -> value=-device, 00:01:00.917 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.917 ==> default: -> value=-device, 00:01:00.917 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:00.917 ==> default: -> value=-drive, 00:01:00.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:00.917 ==> default: -> value=-device, 00:01:00.917 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.917 ==> default: -> value=-drive, 00:01:00.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:00.917 ==> default: -> value=-device, 00:01:00.917 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.917 ==> default: -> value=-drive, 00:01:00.917 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:00.917 ==> default: -> value=-device, 00:01:00.917 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.177 ==> default: Creating shared folders metadata... 00:01:01.177 ==> default: Starting domain. 00:01:03.084 ==> default: Waiting for domain to get an IP address... 00:01:18.030 ==> default: Waiting for SSH to become available... 00:01:19.413 ==> default: Configuring and enabling network interfaces... 
00:01:25.990 default: SSH address: 192.168.121.218:22 00:01:25.990 default: SSH username: vagrant 00:01:25.990 default: SSH auth method: private key 00:01:29.284 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:37.415 ==> default: Mounting SSHFS shared folder... 00:01:39.327 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:39.327 ==> default: Checking Mount.. 00:01:41.237 ==> default: Folder Successfully Mounted! 00:01:41.237 ==> default: Running provisioner: file... 00:01:42.178 default: ~/.gitconfig => .gitconfig 00:01:42.748 00:01:42.748 SUCCESS! 00:01:42.748 00:01:42.748 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:42.748 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:42.748 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:42.748 00:01:42.756 [Pipeline] } 00:01:42.767 [Pipeline] // stage 00:01:42.773 [Pipeline] dir 00:01:42.774 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:42.775 [Pipeline] { 00:01:42.783 [Pipeline] catchError 00:01:42.785 [Pipeline] { 00:01:42.794 [Pipeline] sh 00:01:43.074 + vagrant ssh-config --host vagrant 00:01:43.074 + sed -ne /^Host/,$p 00:01:43.074 + tee ssh_conf 00:01:45.621 Host vagrant 00:01:45.621 HostName 192.168.121.218 00:01:45.621 User vagrant 00:01:45.621 Port 22 00:01:45.621 UserKnownHostsFile /dev/null 00:01:45.621 StrictHostKeyChecking no 00:01:45.621 PasswordAuthentication no 00:01:45.621 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:45.621 IdentitiesOnly yes 00:01:45.621 LogLevel FATAL 00:01:45.621 ForwardAgent yes 00:01:45.621 ForwardX11 yes 00:01:45.621 00:01:45.637 [Pipeline] withEnv 00:01:45.639 [Pipeline] { 00:01:45.653 [Pipeline] sh 00:01:45.937 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:45.937 source /etc/os-release 00:01:45.937 [[ -e /image.version ]] && img=$(< /image.version) 00:01:45.937 # Minimal, systemd-like check. 00:01:45.937 if [[ -e /.dockerenv ]]; then 00:01:45.937 # Clear garbage from the node's name: 00:01:45.937 # agt-er_autotest_547-896 -> autotest_547-896 00:01:45.937 # $HOSTNAME is the actual container id 00:01:45.937 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:45.937 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:45.937 # We can assume this is a mount from a host where container is running, 00:01:45.937 # so fetch its hostname to easily identify the target swarm worker. 
00:01:45.937 container="$(< /etc/hostname) ($agent)" 00:01:45.937 else 00:01:45.937 # Fallback 00:01:45.937 container=$agent 00:01:45.937 fi 00:01:45.937 fi 00:01:45.937 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:45.937 00:01:46.209 [Pipeline] } 00:01:46.226 [Pipeline] // withEnv 00:01:46.234 [Pipeline] setCustomBuildProperty 00:01:46.249 [Pipeline] stage 00:01:46.251 [Pipeline] { (Tests) 00:01:46.268 [Pipeline] sh 00:01:46.552 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:46.827 [Pipeline] sh 00:01:47.112 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:47.389 [Pipeline] timeout 00:01:47.390 Timeout set to expire in 1 hr 30 min 00:01:47.392 [Pipeline] { 00:01:47.407 [Pipeline] sh 00:01:47.690 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:48.260 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:48.275 [Pipeline] sh 00:01:48.560 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:48.836 [Pipeline] sh 00:01:49.121 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:49.398 [Pipeline] sh 00:01:49.681 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:49.942 ++ readlink -f spdk_repo 00:01:49.942 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:49.942 + [[ -n /home/vagrant/spdk_repo ]] 00:01:49.942 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:49.942 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:49.942 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:49.942 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:49.942 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:49.942 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:49.942 + cd /home/vagrant/spdk_repo 00:01:49.942 + source /etc/os-release 00:01:49.942 ++ NAME='Fedora Linux' 00:01:49.942 ++ VERSION='39 (Cloud Edition)' 00:01:49.942 ++ ID=fedora 00:01:49.942 ++ VERSION_ID=39 00:01:49.942 ++ VERSION_CODENAME= 00:01:49.942 ++ PLATFORM_ID=platform:f39 00:01:49.942 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:49.942 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:49.942 ++ LOGO=fedora-logo-icon 00:01:49.942 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:49.942 ++ HOME_URL=https://fedoraproject.org/ 00:01:49.942 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:49.942 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:49.942 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:49.942 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:49.942 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:49.942 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:49.942 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:49.942 ++ SUPPORT_END=2024-11-12 00:01:49.942 ++ VARIANT='Cloud Edition' 00:01:49.942 ++ VARIANT_ID=cloud 00:01:49.942 + uname -a 00:01:49.942 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:49.942 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:50.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:50.511 Hugepages 00:01:50.511 node hugesize free / total 00:01:50.511 node0 1048576kB 0 / 0 00:01:50.511 node0 2048kB 0 / 0 00:01:50.511 00:01:50.511 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:50.511 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:50.511 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:50.511 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:50.511 + rm -f /tmp/spdk-ld-path 00:01:50.511 + source autorun-spdk.conf 00:01:50.511 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.511 ++ SPDK_RUN_ASAN=1 00:01:50.511 ++ SPDK_RUN_UBSAN=1 00:01:50.511 ++ SPDK_TEST_RAID=1 00:01:50.511 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.511 ++ RUN_NIGHTLY=1 00:01:50.511 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:50.511 + [[ -n '' ]] 00:01:50.511 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:50.511 + for M in /var/spdk/build-*-manifest.txt 00:01:50.511 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:50.511 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.511 + for M in /var/spdk/build-*-manifest.txt 00:01:50.511 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:50.511 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.511 + for M in /var/spdk/build-*-manifest.txt 00:01:50.511 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:50.511 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:50.771 ++ uname 00:01:50.771 + [[ Linux == \L\i\n\u\x ]] 00:01:50.771 + sudo dmesg -T 00:01:50.771 + sudo dmesg --clear 00:01:50.771 + dmesg_pid=5433 00:01:50.771 + [[ Fedora Linux == FreeBSD ]] 00:01:50.771 + sudo dmesg -Tw 00:01:50.771 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.771 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:50.771 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:50.771 + [[ -x /usr/src/fio-static/fio ]] 00:01:50.771 + export FIO_BIN=/usr/src/fio-static/fio 00:01:50.771 + FIO_BIN=/usr/src/fio-static/fio 00:01:50.771 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:50.771 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:50.771 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:50.771 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.771 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:50.771 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:50.771 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.771 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:50.771 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.771 03:50:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:50.771 03:50:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:50.771 03:50:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.771 03:50:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:50.771 03:50:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:50.771 03:50:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:50.771 03:50:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:50.771 03:50:47 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:01:50.771 03:50:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:50.771 03:50:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:51.038 03:50:47 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:51.038 03:50:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:51.038 03:50:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:51.038 03:50:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.038 03:50:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.038 03:50:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.038 03:50:47 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.038 03:50:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.038 03:50:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.038 03:50:47 -- paths/export.sh@5 -- $ export PATH 00:01:51.038 03:50:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.038 03:50:47 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:51.038 03:50:47 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:51.038 03:50:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731901847.XXXXXX 00:01:51.038 03:50:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731901847.HxHU3b 00:01:51.038 03:50:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:51.038 03:50:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:51.038 03:50:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:51.038 03:50:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:51.038 03:50:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.038 03:50:47 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:51.038 03:50:47 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:51.038 03:50:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.038 03:50:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:51.038 03:50:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:51.038 03:50:47 -- pm/common@17 -- $ local monitor 00:01:51.038 03:50:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.038 03:50:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.038 03:50:47 -- pm/common@21 -- $ date +%s 00:01:51.038 03:50:47 -- pm/common@25 -- $ sleep 1 00:01:51.038 03:50:47 -- pm/common@21 -- $ date +%s 00:01:51.038 
03:50:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731901847 00:01:51.038 03:50:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731901847 00:01:51.038 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731901847_collect-cpu-load.pm.log 00:01:51.038 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731901847_collect-vmstat.pm.log 00:01:52.005 03:50:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:52.005 03:50:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.005 03:50:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.005 03:50:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:52.005 03:50:48 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.005 Mon Nov 18 03:50:48 AM UTC 2024 00:01:52.005 03:50:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.005 v25.01-pre-189-g83e8405e4 00:01:52.005 03:50:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:52.005 03:50:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:52.005 03:50:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:52.005 03:50:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.005 03:50:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.005 ************************************ 00:01:52.005 START TEST asan 00:01:52.005 ************************************ 00:01:52.005 using asan 00:01:52.005 03:50:48 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:52.005 00:01:52.005 real 0m0.000s 00:01:52.005 user 0m0.000s 00:01:52.005 sys 0m0.000s 00:01:52.005 03:50:48 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:52.005 03:50:48 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:52.005 ************************************ 00:01:52.005 END TEST asan 00:01:52.005 ************************************ 00:01:52.005 03:50:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.005 03:50:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.005 03:50:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:52.005 03:50:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:52.005 03:50:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.005 ************************************ 00:01:52.005 START TEST ubsan 00:01:52.005 ************************************ 00:01:52.005 using ubsan 00:01:52.005 03:50:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:52.005 00:01:52.005 real 0m0.000s 00:01:52.005 user 0m0.000s 00:01:52.005 sys 0m0.000s 00:01:52.005 03:50:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:52.005 03:50:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.005 ************************************ 00:01:52.005 END TEST ubsan 00:01:52.005 ************************************ 00:01:52.266 03:50:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:52.266 03:50:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:52.266 03:50:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:52.266 03:50:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:52.266 03:50:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:52.266 03:50:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:52.266 03:50:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:52.266 03:50:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:52.266 03:50:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:52.266 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:52.266 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:52.836 Using 'verbs' RDMA provider
00:02:08.678 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:23.581 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:24.151 Creating mk/config.mk...done.
00:02:24.151 Creating mk/cc.flags.mk...done.
00:02:24.151 Type 'make' to build.
00:02:24.151 03:51:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:24.151 03:51:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:24.151 03:51:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:24.151 03:51:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:24.151 ************************************
00:02:24.151 START TEST make
00:02:24.151 ************************************
00:02:24.151 03:51:20 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:24.722 make[1]: Nothing to be done for 'all'.
00:02:34.721 The Meson build system
00:02:34.721 Version: 1.5.0
00:02:34.721 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:34.721 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:34.721 Build type: native build
00:02:34.721 Program cat found: YES (/usr/bin/cat)
00:02:34.721 Project name: DPDK
00:02:34.721 Project version: 24.03.0
00:02:34.721 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.721 C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.721 Host machine cpu family: x86_64
00:02:34.721 Host machine cpu: x86_64
00:02:34.721 Message: ## Building in Developer Mode ##
00:02:34.721 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:34.721 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:34.721 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:34.721 Program python3 found: YES (/usr/bin/python3)
00:02:34.721 Program cat found: YES (/usr/bin/cat)
00:02:34.721 Compiler for C supports arguments -march=native: YES
00:02:34.721 Checking for size of "void *" : 8
00:02:34.721 Checking for size of "void *" : 8 (cached)
00:02:34.721 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:34.721 Library m found: YES
00:02:34.721 Library numa found: YES
00:02:34.721 Has header "numaif.h" : YES
00:02:34.721 Library fdt found: NO
00:02:34.721 Library execinfo found: NO
00:02:34.721 Has header "execinfo.h" : YES
00:02:34.721 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.721 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:34.721 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:34.722 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:34.722 Run-time dependency openssl found: YES 3.1.1
00:02:34.722 Run-time dependency libpcap found: YES 1.10.4
00:02:34.722 Has header "pcap.h" with dependency
libpcap: YES 00:02:34.722 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.722 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.722 Compiler for C supports arguments -Wformat: YES 00:02:34.722 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.722 Compiler for C supports arguments -Wformat-security: NO 00:02:34.722 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.722 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.722 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.722 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.722 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.722 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.722 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.722 Compiler for C supports arguments -Wundef: YES 00:02:34.722 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.722 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.722 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:34.722 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.722 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.722 Program objdump found: YES (/usr/bin/objdump) 00:02:34.722 Compiler for C supports arguments -mavx512f: YES 00:02:34.722 Checking if "AVX512 checking" compiles: YES 00:02:34.722 Fetching value of define "__SSE4_2__" : 1 00:02:34.722 Fetching value of define "__AES__" : 1 00:02:34.722 Fetching value of define "__AVX__" : 1 00:02:34.722 Fetching value of define "__AVX2__" : 1 00:02:34.722 Fetching value of define "__AVX512BW__" : 1 00:02:34.722 Fetching value of define "__AVX512CD__" : 1 00:02:34.722 Fetching value of define "__AVX512DQ__" : 1 00:02:34.722 Fetching value of define "__AVX512F__" : 1 00:02:34.722 Fetching value of define "__AVX512VL__" : 1 00:02:34.722 Fetching value of define 
"__PCLMUL__" : 1 00:02:34.722 Fetching value of define "__RDRND__" : 1 00:02:34.722 Fetching value of define "__RDSEED__" : 1 00:02:34.722 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:34.722 Fetching value of define "__znver1__" : (undefined) 00:02:34.722 Fetching value of define "__znver2__" : (undefined) 00:02:34.722 Fetching value of define "__znver3__" : (undefined) 00:02:34.722 Fetching value of define "__znver4__" : (undefined) 00:02:34.722 Library asan found: YES 00:02:34.722 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.722 Message: lib/log: Defining dependency "log" 00:02:34.722 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.722 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.722 Library rt found: YES 00:02:34.722 Checking for function "getentropy" : NO 00:02:34.722 Message: lib/eal: Defining dependency "eal" 00:02:34.722 Message: lib/ring: Defining dependency "ring" 00:02:34.722 Message: lib/rcu: Defining dependency "rcu" 00:02:34.722 Message: lib/mempool: Defining dependency "mempool" 00:02:34.722 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.722 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.722 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.722 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.722 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.722 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.722 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:34.722 Compiler for C supports arguments -mpclmul: YES 00:02:34.722 Compiler for C supports arguments -maes: YES 00:02:34.722 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.722 Compiler for C supports arguments -mavx512bw: YES 00:02:34.722 Compiler for C supports arguments -mavx512dq: YES 00:02:34.722 Compiler for C supports arguments -mavx512vl: YES 00:02:34.722 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:34.722 Compiler for C supports arguments -mavx2: YES 00:02:34.722 Compiler for C supports arguments -mavx: YES 00:02:34.722 Message: lib/net: Defining dependency "net" 00:02:34.722 Message: lib/meter: Defining dependency "meter" 00:02:34.722 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.722 Message: lib/pci: Defining dependency "pci" 00:02:34.722 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.722 Message: lib/hash: Defining dependency "hash" 00:02:34.722 Message: lib/timer: Defining dependency "timer" 00:02:34.722 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.722 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.722 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.722 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.722 Message: lib/power: Defining dependency "power" 00:02:34.722 Message: lib/reorder: Defining dependency "reorder" 00:02:34.722 Message: lib/security: Defining dependency "security" 00:02:34.722 Has header "linux/userfaultfd.h" : YES 00:02:34.722 Has header "linux/vduse.h" : YES 00:02:34.722 Message: lib/vhost: Defining dependency "vhost" 00:02:34.722 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.722 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.722 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.722 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.722 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.722 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.722 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.722 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.722 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.722 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.722 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.722 Configuring doxy-api-html.conf using configuration 00:02:34.722 Configuring doxy-api-man.conf using configuration 00:02:34.722 Program mandb found: YES (/usr/bin/mandb) 00:02:34.722 Program sphinx-build found: NO 00:02:34.722 Configuring rte_build_config.h using configuration 00:02:34.722 Message: 00:02:34.722 ================= 00:02:34.722 Applications Enabled 00:02:34.722 ================= 00:02:34.722 00:02:34.722 apps: 00:02:34.722 00:02:34.722 00:02:34.722 Message: 00:02:34.722 ================= 00:02:34.722 Libraries Enabled 00:02:34.722 ================= 00:02:34.722 00:02:34.722 libs: 00:02:34.722 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.722 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.722 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.722 00:02:34.722 Message: 00:02:34.722 =============== 00:02:34.722 Drivers Enabled 00:02:34.722 =============== 00:02:34.722 00:02:34.722 common: 00:02:34.722 00:02:34.722 bus: 00:02:34.722 pci, vdev, 00:02:34.722 mempool: 00:02:34.722 ring, 00:02:34.722 dma: 00:02:34.722 00:02:34.722 net: 00:02:34.722 00:02:34.722 crypto: 00:02:34.722 00:02:34.722 compress: 00:02:34.722 00:02:34.722 vdpa: 00:02:34.722 00:02:34.722 00:02:34.722 Message: 00:02:34.722 ================= 00:02:34.722 Content Skipped 00:02:34.722 ================= 00:02:34.722 00:02:34.722 apps: 00:02:34.722 dumpcap: explicitly disabled via build config 00:02:34.722 graph: explicitly disabled via build config 00:02:34.722 pdump: explicitly disabled via build config 00:02:34.722 proc-info: explicitly disabled via build config 00:02:34.722 test-acl: explicitly disabled via build config 00:02:34.722 test-bbdev: explicitly disabled via build config 00:02:34.722 test-cmdline: explicitly disabled via build config 00:02:34.722 test-compress-perf: explicitly disabled via build config 00:02:34.722 test-crypto-perf: explicitly disabled via build 
config 00:02:34.722 test-dma-perf: explicitly disabled via build config 00:02:34.722 test-eventdev: explicitly disabled via build config 00:02:34.722 test-fib: explicitly disabled via build config 00:02:34.722 test-flow-perf: explicitly disabled via build config 00:02:34.722 test-gpudev: explicitly disabled via build config 00:02:34.722 test-mldev: explicitly disabled via build config 00:02:34.722 test-pipeline: explicitly disabled via build config 00:02:34.722 test-pmd: explicitly disabled via build config 00:02:34.722 test-regex: explicitly disabled via build config 00:02:34.722 test-sad: explicitly disabled via build config 00:02:34.722 test-security-perf: explicitly disabled via build config 00:02:34.722 00:02:34.722 libs: 00:02:34.722 argparse: explicitly disabled via build config 00:02:34.722 metrics: explicitly disabled via build config 00:02:34.722 acl: explicitly disabled via build config 00:02:34.722 bbdev: explicitly disabled via build config 00:02:34.722 bitratestats: explicitly disabled via build config 00:02:34.722 bpf: explicitly disabled via build config 00:02:34.722 cfgfile: explicitly disabled via build config 00:02:34.722 distributor: explicitly disabled via build config 00:02:34.722 efd: explicitly disabled via build config 00:02:34.722 eventdev: explicitly disabled via build config 00:02:34.722 dispatcher: explicitly disabled via build config 00:02:34.722 gpudev: explicitly disabled via build config 00:02:34.722 gro: explicitly disabled via build config 00:02:34.722 gso: explicitly disabled via build config 00:02:34.722 ip_frag: explicitly disabled via build config 00:02:34.722 jobstats: explicitly disabled via build config 00:02:34.722 latencystats: explicitly disabled via build config 00:02:34.722 lpm: explicitly disabled via build config 00:02:34.722 member: explicitly disabled via build config 00:02:34.722 pcapng: explicitly disabled via build config 00:02:34.722 rawdev: explicitly disabled via build config 00:02:34.722 regexdev: explicitly 
disabled via build config 00:02:34.722 mldev: explicitly disabled via build config 00:02:34.722 rib: explicitly disabled via build config 00:02:34.722 sched: explicitly disabled via build config 00:02:34.722 stack: explicitly disabled via build config 00:02:34.723 ipsec: explicitly disabled via build config 00:02:34.723 pdcp: explicitly disabled via build config 00:02:34.723 fib: explicitly disabled via build config 00:02:34.723 port: explicitly disabled via build config 00:02:34.723 pdump: explicitly disabled via build config 00:02:34.723 table: explicitly disabled via build config 00:02:34.723 pipeline: explicitly disabled via build config 00:02:34.723 graph: explicitly disabled via build config 00:02:34.723 node: explicitly disabled via build config 00:02:34.723 00:02:34.723 drivers: 00:02:34.723 common/cpt: not in enabled drivers build config 00:02:34.723 common/dpaax: not in enabled drivers build config 00:02:34.723 common/iavf: not in enabled drivers build config 00:02:34.723 common/idpf: not in enabled drivers build config 00:02:34.723 common/ionic: not in enabled drivers build config 00:02:34.723 common/mvep: not in enabled drivers build config 00:02:34.723 common/octeontx: not in enabled drivers build config 00:02:34.723 bus/auxiliary: not in enabled drivers build config 00:02:34.723 bus/cdx: not in enabled drivers build config 00:02:34.723 bus/dpaa: not in enabled drivers build config 00:02:34.723 bus/fslmc: not in enabled drivers build config 00:02:34.723 bus/ifpga: not in enabled drivers build config 00:02:34.723 bus/platform: not in enabled drivers build config 00:02:34.723 bus/uacce: not in enabled drivers build config 00:02:34.723 bus/vmbus: not in enabled drivers build config 00:02:34.723 common/cnxk: not in enabled drivers build config 00:02:34.723 common/mlx5: not in enabled drivers build config 00:02:34.723 common/nfp: not in enabled drivers build config 00:02:34.723 common/nitrox: not in enabled drivers build config 00:02:34.723 common/qat: not 
in enabled drivers build config 00:02:34.723 common/sfc_efx: not in enabled drivers build config 00:02:34.723 mempool/bucket: not in enabled drivers build config 00:02:34.723 mempool/cnxk: not in enabled drivers build config 00:02:34.723 mempool/dpaa: not in enabled drivers build config 00:02:34.723 mempool/dpaa2: not in enabled drivers build config 00:02:34.723 mempool/octeontx: not in enabled drivers build config 00:02:34.723 mempool/stack: not in enabled drivers build config 00:02:34.723 dma/cnxk: not in enabled drivers build config 00:02:34.723 dma/dpaa: not in enabled drivers build config 00:02:34.723 dma/dpaa2: not in enabled drivers build config 00:02:34.723 dma/hisilicon: not in enabled drivers build config 00:02:34.723 dma/idxd: not in enabled drivers build config 00:02:34.723 dma/ioat: not in enabled drivers build config 00:02:34.723 dma/skeleton: not in enabled drivers build config 00:02:34.723 net/af_packet: not in enabled drivers build config 00:02:34.723 net/af_xdp: not in enabled drivers build config 00:02:34.723 net/ark: not in enabled drivers build config 00:02:34.723 net/atlantic: not in enabled drivers build config 00:02:34.723 net/avp: not in enabled drivers build config 00:02:34.723 net/axgbe: not in enabled drivers build config 00:02:34.723 net/bnx2x: not in enabled drivers build config 00:02:34.723 net/bnxt: not in enabled drivers build config 00:02:34.723 net/bonding: not in enabled drivers build config 00:02:34.723 net/cnxk: not in enabled drivers build config 00:02:34.723 net/cpfl: not in enabled drivers build config 00:02:34.723 net/cxgbe: not in enabled drivers build config 00:02:34.723 net/dpaa: not in enabled drivers build config 00:02:34.723 net/dpaa2: not in enabled drivers build config 00:02:34.723 net/e1000: not in enabled drivers build config 00:02:34.723 net/ena: not in enabled drivers build config 00:02:34.723 net/enetc: not in enabled drivers build config 00:02:34.723 net/enetfec: not in enabled drivers build config 
00:02:34.723 net/enic: not in enabled drivers build config 00:02:34.723 net/failsafe: not in enabled drivers build config 00:02:34.723 net/fm10k: not in enabled drivers build config 00:02:34.723 net/gve: not in enabled drivers build config 00:02:34.723 net/hinic: not in enabled drivers build config 00:02:34.723 net/hns3: not in enabled drivers build config 00:02:34.723 net/i40e: not in enabled drivers build config 00:02:34.723 net/iavf: not in enabled drivers build config 00:02:34.723 net/ice: not in enabled drivers build config 00:02:34.723 net/idpf: not in enabled drivers build config 00:02:34.723 net/igc: not in enabled drivers build config 00:02:34.723 net/ionic: not in enabled drivers build config 00:02:34.723 net/ipn3ke: not in enabled drivers build config 00:02:34.723 net/ixgbe: not in enabled drivers build config 00:02:34.723 net/mana: not in enabled drivers build config 00:02:34.723 net/memif: not in enabled drivers build config 00:02:34.723 net/mlx4: not in enabled drivers build config 00:02:34.723 net/mlx5: not in enabled drivers build config 00:02:34.723 net/mvneta: not in enabled drivers build config 00:02:34.723 net/mvpp2: not in enabled drivers build config 00:02:34.723 net/netvsc: not in enabled drivers build config 00:02:34.723 net/nfb: not in enabled drivers build config 00:02:34.723 net/nfp: not in enabled drivers build config 00:02:34.723 net/ngbe: not in enabled drivers build config 00:02:34.723 net/null: not in enabled drivers build config 00:02:34.723 net/octeontx: not in enabled drivers build config 00:02:34.723 net/octeon_ep: not in enabled drivers build config 00:02:34.723 net/pcap: not in enabled drivers build config 00:02:34.723 net/pfe: not in enabled drivers build config 00:02:34.723 net/qede: not in enabled drivers build config 00:02:34.723 net/ring: not in enabled drivers build config 00:02:34.723 net/sfc: not in enabled drivers build config 00:02:34.723 net/softnic: not in enabled drivers build config 00:02:34.723 net/tap: not in 
enabled drivers build config 00:02:34.723 net/thunderx: not in enabled drivers build config 00:02:34.723 net/txgbe: not in enabled drivers build config 00:02:34.723 net/vdev_netvsc: not in enabled drivers build config 00:02:34.723 net/vhost: not in enabled drivers build config 00:02:34.723 net/virtio: not in enabled drivers build config 00:02:34.723 net/vmxnet3: not in enabled drivers build config 00:02:34.723 raw/*: missing internal dependency, "rawdev" 00:02:34.723 crypto/armv8: not in enabled drivers build config 00:02:34.723 crypto/bcmfs: not in enabled drivers build config 00:02:34.723 crypto/caam_jr: not in enabled drivers build config 00:02:34.723 crypto/ccp: not in enabled drivers build config 00:02:34.723 crypto/cnxk: not in enabled drivers build config 00:02:34.723 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.723 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.723 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.723 crypto/mlx5: not in enabled drivers build config 00:02:34.723 crypto/mvsam: not in enabled drivers build config 00:02:34.723 crypto/nitrox: not in enabled drivers build config 00:02:34.723 crypto/null: not in enabled drivers build config 00:02:34.723 crypto/octeontx: not in enabled drivers build config 00:02:34.723 crypto/openssl: not in enabled drivers build config 00:02:34.723 crypto/scheduler: not in enabled drivers build config 00:02:34.723 crypto/uadk: not in enabled drivers build config 00:02:34.723 crypto/virtio: not in enabled drivers build config 00:02:34.723 compress/isal: not in enabled drivers build config 00:02:34.723 compress/mlx5: not in enabled drivers build config 00:02:34.723 compress/nitrox: not in enabled drivers build config 00:02:34.723 compress/octeontx: not in enabled drivers build config 00:02:34.723 compress/zlib: not in enabled drivers build config 00:02:34.723 regex/*: missing internal dependency, "regexdev" 00:02:34.723 ml/*: missing internal dependency, "mldev" 
00:02:34.723 vdpa/ifc: not in enabled drivers build config
00:02:34.723 vdpa/mlx5: not in enabled drivers build config
00:02:34.723 vdpa/nfp: not in enabled drivers build config
00:02:34.723 vdpa/sfc: not in enabled drivers build config
00:02:34.723 event/*: missing internal dependency, "eventdev"
00:02:34.723 baseband/*: missing internal dependency, "bbdev"
00:02:34.723 gpu/*: missing internal dependency, "gpudev"
00:02:34.723
00:02:34.723
00:02:34.983 Build targets in project: 85
00:02:34.983
00:02:34.983 DPDK 24.03.0
00:02:34.983
00:02:34.983 User defined options
00:02:34.983 buildtype : debug
00:02:34.983 default_library : shared
00:02:34.983 libdir : lib
00:02:34.983 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:34.983 b_sanitize : address
00:02:34.983 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:34.983 c_link_args :
00:02:34.983 cpu_instruction_set: native
00:02:34.983 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:34.983 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:34.983 enable_docs : false
00:02:34.983 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:34.983 enable_kmods : false
00:02:34.983 max_lcores : 128
00:02:34.983 tests : false
00:02:34.983
00:02:34.983 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:35.554 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:35.554 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:35.814 [2/268] Compiling C object
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.814 [3/268] Linking static target lib/librte_kvargs.a 00:02:35.814 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:35.814 [5/268] Linking static target lib/librte_log.a 00:02:35.814 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.074 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.074 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.074 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.074 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:36.075 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.075 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.334 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.334 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.334 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:36.334 [16/268] Linking static target lib/librte_telemetry.a 00:02:36.334 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:36.334 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:36.595 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.855 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.855 [21/268] Linking target lib/librte_log.so.24.1 00:02:36.855 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.855 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:36.855 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.855 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:36.855 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.855 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.855 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:37.115 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.115 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:37.115 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.115 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.115 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.375 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:37.375 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:37.375 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.375 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.375 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:37.375 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.375 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.375 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.635 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.635 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.635 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.635 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.896 [46/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.896 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.896 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.896 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.156 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.156 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.156 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.156 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.156 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.156 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.417 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.417 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.417 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.417 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.676 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.676 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.676 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.676 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.676 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.936 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.936 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.936 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.936 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
00:02:39.196 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.196 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.196 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.196 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.196 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.456 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.456 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.456 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.456 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.456 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.456 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.715 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.715 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.715 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.715 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.715 [84/268] Linking static target lib/librte_ring.a 00:02:39.715 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.715 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.974 [87/268] Linking static target lib/librte_eal.a 00:02:39.974 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.974 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.974 [90/268] Linking static target lib/librte_rcu.a 00:02:39.974 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.233 [92/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.233 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.233 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.233 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.233 [96/268] Linking static target lib/librte_mempool.a 00:02:40.233 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.493 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.493 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.493 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.493 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.753 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.753 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.753 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.753 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.013 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.013 [107/268] Linking static target lib/librte_net.a 00:02:41.013 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.013 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.013 [110/268] Linking static target lib/librte_mbuf.a 00:02:41.013 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.013 [112/268] Linking static target lib/librte_meter.a 00:02:41.271 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.272 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.272 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.272 [116/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.272 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.536 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.536 [119/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.536 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.808 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.808 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.068 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.068 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.068 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.068 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.068 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.068 [128/268] Linking static target lib/librte_pci.a 00:02:42.328 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.328 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.328 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.328 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.328 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.587 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.587 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.587 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.587 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 
00:02:42.587 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.587 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.587 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.587 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:42.587 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.587 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.587 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:42.587 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.845 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.845 [147/268] Linking static target lib/librte_cmdline.a 00:02:43.104 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.104 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.104 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:43.104 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.363 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.363 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.363 [154/268] Linking static target lib/librte_timer.a 00:02:43.363 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.622 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.622 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.881 [158/268] Linking static target lib/librte_compressdev.a 00:02:43.881 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.881 [160/268] Linking static target 
lib/librte_hash.a 00:02:43.881 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:43.881 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.881 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.881 [164/268] Linking static target lib/librte_ethdev.a 00:02:43.881 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.140 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.140 [167/268] Linking static target lib/librte_dmadev.a 00:02:44.140 [168/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.140 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.400 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.400 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.400 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.400 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.659 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.659 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.659 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.919 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.919 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.919 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.919 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.919 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.919 [182/268] 
Linking static target lib/librte_cryptodev.a 00:02:44.919 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.919 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.179 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.179 [186/268] Linking static target lib/librte_power.a 00:02:45.440 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.440 [188/268] Linking static target lib/librte_reorder.a 00:02:45.440 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.440 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.440 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.700 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.700 [193/268] Linking static target lib/librte_security.a 00:02:45.961 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.961 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.221 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.480 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.480 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.480 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.480 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.480 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.739 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.998 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:46.998 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 
00:02:46.998 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.998 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.998 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.998 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.257 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.257 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:47.257 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.257 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.517 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.517 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:47.517 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.517 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.517 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.517 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.517 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:47.776 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.776 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.776 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.776 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.776 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.776 [225/268] Linking static target 
drivers/librte_mempool_ring.a 00:02:47.776 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.036 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.972 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.392 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.392 [230/268] Linking target lib/librte_eal.so.24.1 00:02:50.392 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.651 [232/268] Linking target lib/librte_ring.so.24.1 00:02:50.651 [233/268] Linking target lib/librte_pci.so.24.1 00:02:50.651 [234/268] Linking target lib/librte_timer.so.24.1 00:02:50.651 [235/268] Linking target lib/librte_meter.so.24.1 00:02:50.651 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:50.651 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:50.651 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:50.651 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:50.651 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:50.651 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:50.651 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:50.651 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:50.651 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:50.651 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:50.909 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:50.909 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:50.909 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:50.909 
[249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.169 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.169 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:02:51.170 [252/268] Linking target lib/librte_net.so.24.1 00:02:51.170 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:51.170 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:51.170 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.170 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.170 [257/268] Linking target lib/librte_hash.so.24.1 00:02:51.170 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:51.170 [259/268] Linking target lib/librte_security.so.24.1 00:02:51.430 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:52.810 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:52.810 [262/268] Linking static target lib/librte_vhost.a 00:02:52.810 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.810 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:53.068 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.068 [266/268] Linking target lib/librte_power.so.24.1 00:02:55.607 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.607 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:55.607 INFO: autodetecting backend as ninja 00:02:55.607 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.699 CC lib/ut/ut.o 00:03:13.699 CC lib/log/log.o 00:03:13.699 CC lib/ut_mock/mock.o 00:03:13.699 CC lib/log/log_deprecated.o 00:03:13.699 CC lib/log/log_flags.o 00:03:13.699 LIB libspdk_log.a 00:03:13.699 LIB libspdk_ut.a 
00:03:13.699 LIB libspdk_ut_mock.a 00:03:13.699 SO libspdk_ut.so.2.0 00:03:13.699 SO libspdk_log.so.7.1 00:03:13.699 SO libspdk_ut_mock.so.6.0 00:03:13.699 SYMLINK libspdk_ut.so 00:03:13.699 SYMLINK libspdk_log.so 00:03:13.699 SYMLINK libspdk_ut_mock.so 00:03:13.699 CC lib/dma/dma.o 00:03:13.699 CC lib/ioat/ioat.o 00:03:13.699 CC lib/util/base64.o 00:03:13.699 CC lib/util/bit_array.o 00:03:13.699 CC lib/util/crc16.o 00:03:13.699 CC lib/util/cpuset.o 00:03:13.699 CXX lib/trace_parser/trace.o 00:03:13.699 CC lib/util/crc32.o 00:03:13.699 CC lib/util/crc32c.o 00:03:13.699 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.699 CC lib/util/crc32_ieee.o 00:03:13.699 CC lib/util/crc64.o 00:03:13.699 CC lib/util/dif.o 00:03:13.699 LIB libspdk_dma.a 00:03:13.699 CC lib/vfio_user/host/vfio_user.o 00:03:13.699 SO libspdk_dma.so.5.0 00:03:13.699 CC lib/util/fd.o 00:03:13.699 CC lib/util/fd_group.o 00:03:13.699 SYMLINK libspdk_dma.so 00:03:13.699 CC lib/util/file.o 00:03:13.699 CC lib/util/hexlify.o 00:03:13.699 CC lib/util/iov.o 00:03:13.699 LIB libspdk_ioat.a 00:03:13.699 CC lib/util/math.o 00:03:13.699 SO libspdk_ioat.so.7.0 00:03:13.699 CC lib/util/net.o 00:03:13.699 SYMLINK libspdk_ioat.so 00:03:13.699 CC lib/util/pipe.o 00:03:13.699 LIB libspdk_vfio_user.a 00:03:13.699 CC lib/util/strerror_tls.o 00:03:13.699 CC lib/util/string.o 00:03:13.699 SO libspdk_vfio_user.so.5.0 00:03:13.699 CC lib/util/uuid.o 00:03:13.699 CC lib/util/xor.o 00:03:13.699 CC lib/util/zipf.o 00:03:13.699 SYMLINK libspdk_vfio_user.so 00:03:13.699 CC lib/util/md5.o 00:03:13.699 LIB libspdk_util.a 00:03:13.699 SO libspdk_util.so.10.1 00:03:13.699 LIB libspdk_trace_parser.a 00:03:13.699 SYMLINK libspdk_util.so 00:03:13.699 SO libspdk_trace_parser.so.6.0 00:03:13.699 SYMLINK libspdk_trace_parser.so 00:03:13.699 CC lib/json/json_parse.o 00:03:13.699 CC lib/rdma_utils/rdma_utils.o 00:03:13.699 CC lib/json/json_util.o 00:03:13.699 CC lib/json/json_write.o 00:03:13.699 CC lib/idxd/idxd_user.o 00:03:13.699 CC 
lib/idxd/idxd.o 00:03:13.699 CC lib/idxd/idxd_kernel.o 00:03:13.699 CC lib/env_dpdk/env.o 00:03:13.699 CC lib/vmd/vmd.o 00:03:13.699 CC lib/conf/conf.o 00:03:13.960 CC lib/vmd/led.o 00:03:13.960 CC lib/env_dpdk/memory.o 00:03:13.960 CC lib/env_dpdk/pci.o 00:03:13.960 CC lib/env_dpdk/init.o 00:03:13.960 LIB libspdk_conf.a 00:03:13.960 LIB libspdk_rdma_utils.a 00:03:13.960 LIB libspdk_json.a 00:03:13.960 SO libspdk_conf.so.6.0 00:03:13.960 SO libspdk_rdma_utils.so.1.0 00:03:13.960 SO libspdk_json.so.6.0 00:03:13.960 CC lib/env_dpdk/threads.o 00:03:13.960 SYMLINK libspdk_rdma_utils.so 00:03:13.960 SYMLINK libspdk_conf.so 00:03:13.960 CC lib/env_dpdk/pci_ioat.o 00:03:14.219 CC lib/env_dpdk/pci_virtio.o 00:03:14.219 SYMLINK libspdk_json.so 00:03:14.219 CC lib/env_dpdk/pci_vmd.o 00:03:14.219 CC lib/env_dpdk/pci_idxd.o 00:03:14.219 CC lib/env_dpdk/pci_event.o 00:03:14.219 CC lib/env_dpdk/sigbus_handler.o 00:03:14.219 CC lib/env_dpdk/pci_dpdk.o 00:03:14.219 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:14.479 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:14.479 LIB libspdk_idxd.a 00:03:14.479 SO libspdk_idxd.so.12.1 00:03:14.479 SYMLINK libspdk_idxd.so 00:03:14.479 CC lib/jsonrpc/jsonrpc_server.o 00:03:14.479 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:14.479 CC lib/jsonrpc/jsonrpc_client.o 00:03:14.479 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.479 LIB libspdk_vmd.a 00:03:14.479 CC lib/rdma_provider/common.o 00:03:14.479 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:14.739 SO libspdk_vmd.so.6.0 00:03:14.739 SYMLINK libspdk_vmd.so 00:03:14.739 LIB libspdk_rdma_provider.a 00:03:14.739 SO libspdk_rdma_provider.so.7.0 00:03:14.739 LIB libspdk_jsonrpc.a 00:03:14.999 SO libspdk_jsonrpc.so.6.0 00:03:14.999 SYMLINK libspdk_rdma_provider.so 00:03:14.999 SYMLINK libspdk_jsonrpc.so 00:03:15.260 CC lib/rpc/rpc.o 00:03:15.520 LIB libspdk_env_dpdk.a 00:03:15.781 LIB libspdk_rpc.a 00:03:15.781 SO libspdk_env_dpdk.so.15.1 00:03:15.781 SO libspdk_rpc.so.6.0 00:03:15.781 SYMLINK libspdk_rpc.so 
00:03:15.781 SYMLINK libspdk_env_dpdk.so 00:03:16.041 CC lib/trace/trace.o 00:03:16.041 CC lib/keyring/keyring.o 00:03:16.041 CC lib/trace/trace_flags.o 00:03:16.041 CC lib/trace/trace_rpc.o 00:03:16.041 CC lib/keyring/keyring_rpc.o 00:03:16.041 CC lib/notify/notify.o 00:03:16.041 CC lib/notify/notify_rpc.o 00:03:16.301 LIB libspdk_notify.a 00:03:16.301 SO libspdk_notify.so.6.0 00:03:16.301 LIB libspdk_keyring.a 00:03:16.301 SYMLINK libspdk_notify.so 00:03:16.561 SO libspdk_keyring.so.2.0 00:03:16.561 LIB libspdk_trace.a 00:03:16.561 SO libspdk_trace.so.11.0 00:03:16.561 SYMLINK libspdk_keyring.so 00:03:16.561 SYMLINK libspdk_trace.so 00:03:17.132 CC lib/thread/thread.o 00:03:17.132 CC lib/thread/iobuf.o 00:03:17.132 CC lib/sock/sock.o 00:03:17.132 CC lib/sock/sock_rpc.o 00:03:17.393 LIB libspdk_sock.a 00:03:17.694 SO libspdk_sock.so.10.0 00:03:17.694 SYMLINK libspdk_sock.so 00:03:17.975 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.976 CC lib/nvme/nvme_ctrlr.o 00:03:17.976 CC lib/nvme/nvme_ns_cmd.o 00:03:17.976 CC lib/nvme/nvme_fabric.o 00:03:17.976 CC lib/nvme/nvme_ns.o 00:03:17.976 CC lib/nvme/nvme_pcie_common.o 00:03:17.976 CC lib/nvme/nvme_qpair.o 00:03:17.976 CC lib/nvme/nvme_pcie.o 00:03:17.976 CC lib/nvme/nvme.o 00:03:18.916 CC lib/nvme/nvme_quirks.o 00:03:18.916 LIB libspdk_thread.a 00:03:18.916 CC lib/nvme/nvme_transport.o 00:03:18.916 SO libspdk_thread.so.11.0 00:03:18.916 SYMLINK libspdk_thread.so 00:03:18.916 CC lib/nvme/nvme_discovery.o 00:03:18.916 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.916 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.916 CC lib/nvme/nvme_tcp.o 00:03:19.176 CC lib/nvme/nvme_opal.o 00:03:19.176 CC lib/nvme/nvme_io_msg.o 00:03:19.176 CC lib/nvme/nvme_poll_group.o 00:03:19.437 CC lib/nvme/nvme_zns.o 00:03:19.696 CC lib/nvme/nvme_stubs.o 00:03:19.696 CC lib/nvme/nvme_auth.o 00:03:19.696 CC lib/nvme/nvme_cuse.o 00:03:19.696 CC lib/nvme/nvme_rdma.o 00:03:19.696 CC lib/accel/accel.o 00:03:19.956 CC lib/blob/blobstore.o 00:03:19.956 CC 
lib/blob/request.o 00:03:19.956 CC lib/init/json_config.o 00:03:20.216 CC lib/accel/accel_rpc.o 00:03:20.216 CC lib/accel/accel_sw.o 00:03:20.216 CC lib/init/subsystem.o 00:03:20.475 CC lib/blob/zeroes.o 00:03:20.475 CC lib/init/subsystem_rpc.o 00:03:20.475 CC lib/init/rpc.o 00:03:20.735 CC lib/blob/blob_bs_dev.o 00:03:20.735 CC lib/virtio/virtio_vhost_user.o 00:03:20.735 CC lib/virtio/virtio.o 00:03:20.735 LIB libspdk_init.a 00:03:20.735 CC lib/fsdev/fsdev.o 00:03:20.735 SO libspdk_init.so.6.0 00:03:20.994 SYMLINK libspdk_init.so 00:03:20.994 CC lib/fsdev/fsdev_io.o 00:03:20.994 CC lib/virtio/virtio_vfio_user.o 00:03:20.994 CC lib/virtio/virtio_pci.o 00:03:21.254 CC lib/event/app.o 00:03:21.254 CC lib/fsdev/fsdev_rpc.o 00:03:21.254 CC lib/event/reactor.o 00:03:21.254 CC lib/event/log_rpc.o 00:03:21.254 LIB libspdk_accel.a 00:03:21.254 CC lib/event/app_rpc.o 00:03:21.254 CC lib/event/scheduler_static.o 00:03:21.254 LIB libspdk_nvme.a 00:03:21.254 SO libspdk_accel.so.16.0 00:03:21.514 LIB libspdk_virtio.a 00:03:21.514 SYMLINK libspdk_accel.so 00:03:21.514 SO libspdk_virtio.so.7.0 00:03:21.514 SO libspdk_nvme.so.15.0 00:03:21.514 SYMLINK libspdk_virtio.so 00:03:21.774 CC lib/bdev/bdev.o 00:03:21.774 CC lib/bdev/bdev_rpc.o 00:03:21.774 CC lib/bdev/scsi_nvme.o 00:03:21.774 LIB libspdk_fsdev.a 00:03:21.774 CC lib/bdev/part.o 00:03:21.774 CC lib/bdev/bdev_zone.o 00:03:21.774 LIB libspdk_event.a 00:03:21.774 SO libspdk_fsdev.so.2.0 00:03:21.774 SO libspdk_event.so.14.0 00:03:21.774 SYMLINK libspdk_nvme.so 00:03:21.774 SYMLINK libspdk_fsdev.so 00:03:21.774 SYMLINK libspdk_event.so 00:03:22.343 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:22.910 LIB libspdk_fuse_dispatcher.a 00:03:23.169 SO libspdk_fuse_dispatcher.so.1.0 00:03:23.169 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.737 LIB libspdk_blob.a 00:03:23.996 SO libspdk_blob.so.11.0 00:03:23.996 SYMLINK libspdk_blob.so 00:03:24.567 CC lib/blobfs/blobfs.o 00:03:24.567 CC lib/blobfs/tree.o 00:03:24.567 CC lib/lvol/lvol.o 
00:03:25.135 LIB libspdk_bdev.a 00:03:25.135 SO libspdk_bdev.so.17.0 00:03:25.135 SYMLINK libspdk_bdev.so 00:03:25.395 LIB libspdk_blobfs.a 00:03:25.395 SO libspdk_blobfs.so.10.0 00:03:25.395 CC lib/nvmf/ctrlr.o 00:03:25.395 CC lib/nvmf/ctrlr_discovery.o 00:03:25.395 CC lib/nvmf/ctrlr_bdev.o 00:03:25.395 CC lib/nvmf/subsystem.o 00:03:25.395 CC lib/nbd/nbd.o 00:03:25.395 CC lib/ftl/ftl_core.o 00:03:25.395 LIB libspdk_lvol.a 00:03:25.395 CC lib/ublk/ublk.o 00:03:25.395 CC lib/scsi/dev.o 00:03:25.395 SYMLINK libspdk_blobfs.so 00:03:25.395 SO libspdk_lvol.so.10.0 00:03:25.653 CC lib/ublk/ublk_rpc.o 00:03:25.653 SYMLINK libspdk_lvol.so 00:03:25.653 CC lib/nbd/nbd_rpc.o 00:03:25.653 CC lib/ftl/ftl_init.o 00:03:25.653 CC lib/scsi/lun.o 00:03:25.912 CC lib/scsi/port.o 00:03:25.912 CC lib/scsi/scsi.o 00:03:25.912 CC lib/ftl/ftl_layout.o 00:03:25.912 CC lib/ftl/ftl_debug.o 00:03:25.912 LIB libspdk_nbd.a 00:03:25.912 SO libspdk_nbd.so.7.0 00:03:25.912 CC lib/nvmf/nvmf.o 00:03:26.171 CC lib/ftl/ftl_io.o 00:03:26.171 CC lib/scsi/scsi_bdev.o 00:03:26.171 SYMLINK libspdk_nbd.so 00:03:26.171 CC lib/nvmf/nvmf_rpc.o 00:03:26.171 CC lib/nvmf/transport.o 00:03:26.171 LIB libspdk_ublk.a 00:03:26.430 CC lib/scsi/scsi_pr.o 00:03:26.430 SO libspdk_ublk.so.3.0 00:03:26.430 CC lib/nvmf/tcp.o 00:03:26.430 CC lib/ftl/ftl_sb.o 00:03:26.430 SYMLINK libspdk_ublk.so 00:03:26.430 CC lib/scsi/scsi_rpc.o 00:03:26.430 CC lib/scsi/task.o 00:03:26.689 CC lib/ftl/ftl_l2p.o 00:03:26.689 CC lib/nvmf/stubs.o 00:03:26.689 CC lib/nvmf/mdns_server.o 00:03:26.689 LIB libspdk_scsi.a 00:03:26.948 SO libspdk_scsi.so.9.0 00:03:26.948 CC lib/ftl/ftl_l2p_flat.o 00:03:26.948 SYMLINK libspdk_scsi.so 00:03:26.948 CC lib/ftl/ftl_nv_cache.o 00:03:26.948 CC lib/nvmf/rdma.o 00:03:27.207 CC lib/nvmf/auth.o 00:03:27.207 CC lib/ftl/ftl_band.o 00:03:27.207 CC lib/ftl/ftl_band_ops.o 00:03:27.207 CC lib/ftl/ftl_writer.o 00:03:27.466 CC lib/vhost/vhost.o 00:03:27.466 CC lib/iscsi/conn.o 00:03:27.466 CC lib/iscsi/init_grp.o 
00:03:27.466 CC lib/ftl/ftl_rq.o 00:03:27.725 CC lib/ftl/ftl_reloc.o 00:03:27.725 CC lib/ftl/ftl_l2p_cache.o 00:03:27.725 CC lib/iscsi/iscsi.o 00:03:27.984 CC lib/vhost/vhost_rpc.o 00:03:27.984 CC lib/vhost/vhost_scsi.o 00:03:27.984 CC lib/vhost/vhost_blk.o 00:03:27.984 CC lib/iscsi/param.o 00:03:28.243 CC lib/ftl/ftl_p2l.o 00:03:28.243 CC lib/iscsi/portal_grp.o 00:03:28.502 CC lib/iscsi/tgt_node.o 00:03:28.502 CC lib/iscsi/iscsi_subsystem.o 00:03:28.502 CC lib/iscsi/iscsi_rpc.o 00:03:28.502 CC lib/ftl/ftl_p2l_log.o 00:03:28.502 CC lib/iscsi/task.o 00:03:28.502 CC lib/vhost/rte_vhost_user.o 00:03:28.762 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.762 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.762 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.762 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.762 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.021 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.021 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.021 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.021 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.021 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.280 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.280 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.280 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.280 CC lib/ftl/utils/ftl_conf.o 00:03:29.281 CC lib/ftl/utils/ftl_md.o 00:03:29.281 CC lib/ftl/utils/ftl_mempool.o 00:03:29.281 CC lib/ftl/utils/ftl_bitmap.o 00:03:29.281 LIB libspdk_iscsi.a 00:03:29.540 CC lib/ftl/utils/ftl_property.o 00:03:29.540 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.540 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:29.540 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:29.540 SO libspdk_iscsi.so.8.0 00:03:29.540 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:29.540 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:29.540 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:29.540 LIB libspdk_vhost.a 00:03:29.800 LIB libspdk_nvmf.a 00:03:29.800 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.800 SYMLINK libspdk_iscsi.so 00:03:29.800 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.800 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:29.800 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.800 SO libspdk_vhost.so.8.0 00:03:29.800 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.800 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:29.800 SO libspdk_nvmf.so.20.0 00:03:29.800 SYMLINK libspdk_vhost.so 00:03:29.800 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:29.800 CC lib/ftl/base/ftl_base_dev.o 00:03:29.800 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.059 CC lib/ftl/ftl_trace.o 00:03:30.059 SYMLINK libspdk_nvmf.so 00:03:30.059 LIB libspdk_ftl.a 00:03:30.318 SO libspdk_ftl.so.9.0 00:03:30.578 SYMLINK libspdk_ftl.so 00:03:31.201 CC module/env_dpdk/env_dpdk_rpc.o 00:03:31.201 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:31.201 CC module/fsdev/aio/fsdev_aio.o 00:03:31.201 CC module/accel/ioat/accel_ioat.o 00:03:31.201 CC module/keyring/file/keyring.o 00:03:31.201 CC module/accel/error/accel_error.o 00:03:31.201 CC module/sock/posix/posix.o 00:03:31.201 CC module/blob/bdev/blob_bdev.o 00:03:31.201 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:31.201 CC module/scheduler/gscheduler/gscheduler.o 00:03:31.201 LIB libspdk_env_dpdk_rpc.a 00:03:31.201 SO libspdk_env_dpdk_rpc.so.6.0 00:03:31.201 CC module/keyring/file/keyring_rpc.o 00:03:31.202 LIB libspdk_scheduler_dpdk_governor.a 00:03:31.202 SYMLINK libspdk_env_dpdk_rpc.so 00:03:31.202 CC module/accel/ioat/accel_ioat_rpc.o 00:03:31.472 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:31.472 CC module/accel/error/accel_error_rpc.o 00:03:31.472 LIB libspdk_scheduler_gscheduler.a 00:03:31.472 LIB libspdk_scheduler_dynamic.a 00:03:31.472 SO libspdk_scheduler_gscheduler.so.4.0 00:03:31.472 SO libspdk_scheduler_dynamic.so.4.0 00:03:31.472 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:31.472 LIB libspdk_keyring_file.a 00:03:31.472 SYMLINK libspdk_scheduler_dynamic.so 00:03:31.472 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:31.472 LIB libspdk_blob_bdev.a 00:03:31.472 SYMLINK libspdk_scheduler_gscheduler.so 00:03:31.472 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:31.472 LIB libspdk_accel_ioat.a 00:03:31.472 SO libspdk_keyring_file.so.2.0 00:03:31.472 SO libspdk_blob_bdev.so.11.0 00:03:31.472 SO libspdk_accel_ioat.so.6.0 00:03:31.472 LIB libspdk_accel_error.a 00:03:31.472 SO libspdk_accel_error.so.2.0 00:03:31.472 SYMLINK libspdk_keyring_file.so 00:03:31.472 CC module/accel/dsa/accel_dsa.o 00:03:31.472 SYMLINK libspdk_blob_bdev.so 00:03:31.472 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.472 SYMLINK libspdk_accel_ioat.so 00:03:31.472 SYMLINK libspdk_accel_error.so 00:03:31.472 CC module/accel/iaa/accel_iaa.o 00:03:31.472 CC module/accel/iaa/accel_iaa_rpc.o 00:03:31.731 CC module/keyring/linux/keyring.o 00:03:31.731 LIB libspdk_accel_iaa.a 00:03:31.731 SO libspdk_accel_iaa.so.3.0 00:03:31.731 LIB libspdk_accel_dsa.a 00:03:31.731 CC module/bdev/delay/vbdev_delay.o 00:03:31.731 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.731 CC module/bdev/error/vbdev_error.o 00:03:31.731 SO libspdk_accel_dsa.so.5.0 00:03:31.731 CC module/keyring/linux/keyring_rpc.o 00:03:31.731 CC module/bdev/gpt/gpt.o 00:03:31.990 SYMLINK libspdk_accel_iaa.so 00:03:31.990 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.990 CC module/bdev/lvol/vbdev_lvol.o 00:03:31.990 SYMLINK libspdk_accel_dsa.so 00:03:31.990 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.990 LIB libspdk_fsdev_aio.a 00:03:31.990 SO libspdk_fsdev_aio.so.1.0 00:03:31.990 LIB libspdk_keyring_linux.a 00:03:31.990 LIB libspdk_sock_posix.a 00:03:31.990 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.990 SO libspdk_keyring_linux.so.1.0 00:03:31.990 SO libspdk_sock_posix.so.6.0 00:03:31.990 SYMLINK libspdk_fsdev_aio.so 00:03:31.990 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.990 SYMLINK libspdk_keyring_linux.so 00:03:31.990 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.990 LIB libspdk_bdev_error.a 00:03:31.990 SYMLINK libspdk_sock_posix.so 00:03:32.249 SO libspdk_bdev_error.so.6.0 00:03:32.249 LIB libspdk_bdev_gpt.a 00:03:32.249 LIB libspdk_blobfs_bdev.a 
00:03:32.249 SO libspdk_bdev_gpt.so.6.0 00:03:32.249 SYMLINK libspdk_bdev_error.so 00:03:32.249 SO libspdk_blobfs_bdev.so.6.0 00:03:32.249 CC module/bdev/malloc/bdev_malloc.o 00:03:32.249 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:32.249 CC module/bdev/null/bdev_null.o 00:03:32.249 SYMLINK libspdk_bdev_gpt.so 00:03:32.249 CC module/bdev/null/bdev_null_rpc.o 00:03:32.249 CC module/bdev/nvme/bdev_nvme.o 00:03:32.249 SYMLINK libspdk_blobfs_bdev.so 00:03:32.249 LIB libspdk_bdev_delay.a 00:03:32.249 SO libspdk_bdev_delay.so.6.0 00:03:32.508 SYMLINK libspdk_bdev_delay.so 00:03:32.508 CC module/bdev/passthru/vbdev_passthru.o 00:03:32.508 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:32.508 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.508 CC module/bdev/raid/bdev_raid.o 00:03:32.508 LIB libspdk_bdev_lvol.a 00:03:32.508 SO libspdk_bdev_lvol.so.6.0 00:03:32.508 LIB libspdk_bdev_null.a 00:03:32.508 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.508 SO libspdk_bdev_null.so.6.0 00:03:32.508 CC module/bdev/nvme/nvme_rpc.o 00:03:32.508 SYMLINK libspdk_bdev_lvol.so 00:03:32.508 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.508 CC module/bdev/split/vbdev_split.o 00:03:32.769 LIB libspdk_bdev_malloc.a 00:03:32.769 SYMLINK libspdk_bdev_null.so 00:03:32.769 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.769 SO libspdk_bdev_malloc.so.6.0 00:03:32.769 LIB libspdk_bdev_passthru.a 00:03:32.769 SYMLINK libspdk_bdev_malloc.so 00:03:32.769 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:32.769 SO libspdk_bdev_passthru.so.6.0 00:03:32.769 CC module/bdev/nvme/vbdev_opal.o 00:03:32.769 SYMLINK libspdk_bdev_passthru.so 00:03:32.769 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.769 LIB libspdk_bdev_split.a 00:03:33.030 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.030 SO libspdk_bdev_split.so.6.0 00:03:33.030 LIB libspdk_bdev_zone_block.a 00:03:33.030 CC module/bdev/aio/bdev_aio.o 00:03:33.030 SO libspdk_bdev_zone_block.so.6.0 00:03:33.030 SYMLINK libspdk_bdev_split.so 
00:03:33.030 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.030 CC module/bdev/ftl/bdev_ftl.o 00:03:33.030 SYMLINK libspdk_bdev_zone_block.so 00:03:33.030 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.030 CC module/bdev/raid/raid0.o 00:03:33.030 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:33.288 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.288 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.288 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.288 LIB libspdk_bdev_aio.a 00:03:33.288 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.288 CC module/bdev/raid/raid1.o 00:03:33.288 LIB libspdk_bdev_ftl.a 00:03:33.288 SO libspdk_bdev_aio.so.6.0 00:03:33.288 SO libspdk_bdev_ftl.so.6.0 00:03:33.288 CC module/bdev/raid/concat.o 00:03:33.288 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.288 SYMLINK libspdk_bdev_aio.so 00:03:33.288 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.288 CC module/bdev/raid/raid5f.o 00:03:33.288 SYMLINK libspdk_bdev_ftl.so 00:03:33.547 LIB libspdk_bdev_iscsi.a 00:03:33.547 SO libspdk_bdev_iscsi.so.6.0 00:03:33.806 SYMLINK libspdk_bdev_iscsi.so 00:03:33.806 LIB libspdk_bdev_virtio.a 00:03:33.806 SO libspdk_bdev_virtio.so.6.0 00:03:33.806 LIB libspdk_bdev_raid.a 00:03:34.066 SYMLINK libspdk_bdev_virtio.so 00:03:34.066 SO libspdk_bdev_raid.so.6.0 00:03:34.066 SYMLINK libspdk_bdev_raid.so 00:03:35.005 LIB libspdk_bdev_nvme.a 00:03:35.005 SO libspdk_bdev_nvme.so.7.1 00:03:35.264 SYMLINK libspdk_bdev_nvme.so 00:03:35.833 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.833 CC module/event/subsystems/keyring/keyring.o 00:03:35.833 CC module/event/subsystems/fsdev/fsdev.o 00:03:35.833 CC module/event/subsystems/vmd/vmd.o 00:03:35.833 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.833 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.833 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.833 CC module/event/subsystems/sock/sock.o 00:03:35.833 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.833 LIB libspdk_event_scheduler.a 00:03:35.833 LIB 
libspdk_event_fsdev.a 00:03:35.833 LIB libspdk_event_vmd.a 00:03:35.833 LIB libspdk_event_sock.a 00:03:35.833 LIB libspdk_event_vhost_blk.a 00:03:35.833 LIB libspdk_event_keyring.a 00:03:35.833 LIB libspdk_event_iobuf.a 00:03:35.833 SO libspdk_event_scheduler.so.4.0 00:03:35.833 SO libspdk_event_fsdev.so.1.0 00:03:35.833 SO libspdk_event_sock.so.5.0 00:03:35.833 SO libspdk_event_vmd.so.6.0 00:03:35.833 SO libspdk_event_keyring.so.1.0 00:03:35.833 SO libspdk_event_vhost_blk.so.3.0 00:03:35.833 SO libspdk_event_iobuf.so.3.0 00:03:35.833 SYMLINK libspdk_event_sock.so 00:03:35.833 SYMLINK libspdk_event_scheduler.so 00:03:35.833 SYMLINK libspdk_event_keyring.so 00:03:35.833 SYMLINK libspdk_event_fsdev.so 00:03:35.833 SYMLINK libspdk_event_vmd.so 00:03:36.092 SYMLINK libspdk_event_vhost_blk.so 00:03:36.092 SYMLINK libspdk_event_iobuf.so 00:03:36.352 CC module/event/subsystems/accel/accel.o 00:03:36.611 LIB libspdk_event_accel.a 00:03:36.611 SO libspdk_event_accel.so.6.0 00:03:36.611 SYMLINK libspdk_event_accel.so 00:03:37.181 CC module/event/subsystems/bdev/bdev.o 00:03:37.181 LIB libspdk_event_bdev.a 00:03:37.181 SO libspdk_event_bdev.so.6.0 00:03:37.441 SYMLINK libspdk_event_bdev.so 00:03:37.700 CC module/event/subsystems/nbd/nbd.o 00:03:37.700 CC module/event/subsystems/scsi/scsi.o 00:03:37.700 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:37.700 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:37.700 CC module/event/subsystems/ublk/ublk.o 00:03:37.700 LIB libspdk_event_nbd.a 00:03:37.700 LIB libspdk_event_ublk.a 00:03:37.959 SO libspdk_event_nbd.so.6.0 00:03:37.959 SO libspdk_event_ublk.so.3.0 00:03:37.959 LIB libspdk_event_scsi.a 00:03:37.959 SO libspdk_event_scsi.so.6.0 00:03:37.959 LIB libspdk_event_nvmf.a 00:03:37.959 SYMLINK libspdk_event_ublk.so 00:03:37.959 SYMLINK libspdk_event_nbd.so 00:03:37.959 SO libspdk_event_nvmf.so.6.0 00:03:37.959 SYMLINK libspdk_event_scsi.so 00:03:37.959 SYMLINK libspdk_event_nvmf.so 00:03:38.219 CC 
module/event/subsystems/iscsi/iscsi.o 00:03:38.219 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:38.479 LIB libspdk_event_vhost_scsi.a 00:03:38.479 LIB libspdk_event_iscsi.a 00:03:38.479 SO libspdk_event_vhost_scsi.so.3.0 00:03:38.479 SO libspdk_event_iscsi.so.6.0 00:03:38.479 SYMLINK libspdk_event_vhost_scsi.so 00:03:38.738 SYMLINK libspdk_event_iscsi.so 00:03:38.738 SO libspdk.so.6.0 00:03:38.738 SYMLINK libspdk.so 00:03:39.307 CC app/trace_record/trace_record.o 00:03:39.307 CC app/spdk_lspci/spdk_lspci.o 00:03:39.307 CXX app/trace/trace.o 00:03:39.307 CC app/spdk_nvme_perf/perf.o 00:03:39.307 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.307 CC app/nvmf_tgt/nvmf_main.o 00:03:39.307 CC app/spdk_tgt/spdk_tgt.o 00:03:39.307 CC test/thread/poller_perf/poller_perf.o 00:03:39.307 CC examples/util/zipf/zipf.o 00:03:39.307 CC examples/ioat/perf/perf.o 00:03:39.307 LINK spdk_lspci 00:03:39.307 LINK iscsi_tgt 00:03:39.307 LINK nvmf_tgt 00:03:39.307 LINK spdk_trace_record 00:03:39.307 LINK zipf 00:03:39.307 LINK poller_perf 00:03:39.307 LINK spdk_tgt 00:03:39.566 LINK ioat_perf 00:03:39.566 CC app/spdk_nvme_identify/identify.o 00:03:39.566 LINK spdk_trace 00:03:39.566 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.566 CC app/spdk_top/spdk_top.o 00:03:39.566 CC app/spdk_nvme_discover/discovery_aer.o 00:03:39.566 CC examples/ioat/verify/verify.o 00:03:39.825 CC test/dma/test_dma/test_dma.o 00:03:39.825 CC test/app/bdev_svc/bdev_svc.o 00:03:39.825 LINK interrupt_tgt 00:03:39.825 CC examples/thread/thread/thread_ex.o 00:03:39.825 LINK verify 00:03:39.825 LINK spdk_nvme_discover 00:03:40.087 LINK bdev_svc 00:03:40.087 LINK spdk_nvme_perf 00:03:40.087 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.087 CC test/app/histogram_perf/histogram_perf.o 00:03:40.087 LINK thread 00:03:40.087 LINK histogram_perf 00:03:40.087 LINK test_dma 00:03:40.358 CC examples/sock/hello_world/hello_sock.o 00:03:40.358 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.358 CC examples/idxd/perf/perf.o 
00:03:40.358 CC test/app/jsoncat/jsoncat.o 00:03:40.358 LINK lsvmd 00:03:40.358 LINK nvme_fuzz 00:03:40.358 CC examples/accel/perf/accel_perf.o 00:03:40.358 LINK spdk_nvme_identify 00:03:40.358 LINK hello_sock 00:03:40.358 LINK jsoncat 00:03:40.358 CC test/app/stub/stub.o 00:03:40.632 CC examples/blob/hello_world/hello_blob.o 00:03:40.632 CC examples/vmd/led/led.o 00:03:40.632 LINK stub 00:03:40.632 LINK idxd_perf 00:03:40.632 CC examples/blob/cli/blobcli.o 00:03:40.632 LINK spdk_top 00:03:40.632 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.632 LINK led 00:03:40.632 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:40.892 LINK hello_blob 00:03:40.892 CC app/spdk_dd/spdk_dd.o 00:03:40.892 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:40.892 LINK accel_perf 00:03:40.892 CC app/fio/nvme/fio_plugin.o 00:03:40.892 CC app/fio/bdev/fio_plugin.o 00:03:40.892 TEST_HEADER include/spdk/accel.h 00:03:40.892 TEST_HEADER include/spdk/accel_module.h 00:03:40.892 TEST_HEADER include/spdk/assert.h 00:03:40.892 TEST_HEADER include/spdk/barrier.h 00:03:40.892 TEST_HEADER include/spdk/base64.h 00:03:40.892 TEST_HEADER include/spdk/bdev.h 00:03:40.892 TEST_HEADER include/spdk/bdev_module.h 00:03:40.892 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.892 TEST_HEADER include/spdk/bit_array.h 00:03:40.892 TEST_HEADER include/spdk/bit_pool.h 00:03:40.892 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.892 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.892 TEST_HEADER include/spdk/blobfs.h 00:03:40.892 TEST_HEADER include/spdk/blob.h 00:03:40.892 TEST_HEADER include/spdk/conf.h 00:03:40.892 TEST_HEADER include/spdk/config.h 00:03:40.892 CC app/vhost/vhost.o 00:03:40.892 TEST_HEADER include/spdk/cpuset.h 00:03:40.892 TEST_HEADER include/spdk/crc16.h 00:03:40.892 TEST_HEADER include/spdk/crc32.h 00:03:40.892 TEST_HEADER include/spdk/crc64.h 00:03:40.892 TEST_HEADER include/spdk/dif.h 00:03:40.892 TEST_HEADER include/spdk/dma.h 00:03:40.892 TEST_HEADER include/spdk/endian.h 00:03:41.152 
TEST_HEADER include/spdk/env_dpdk.h 00:03:41.152 TEST_HEADER include/spdk/env.h 00:03:41.152 TEST_HEADER include/spdk/event.h 00:03:41.152 TEST_HEADER include/spdk/fd_group.h 00:03:41.152 TEST_HEADER include/spdk/fd.h 00:03:41.152 TEST_HEADER include/spdk/file.h 00:03:41.152 TEST_HEADER include/spdk/fsdev.h 00:03:41.152 TEST_HEADER include/spdk/fsdev_module.h 00:03:41.152 TEST_HEADER include/spdk/ftl.h 00:03:41.152 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:41.152 TEST_HEADER include/spdk/gpt_spec.h 00:03:41.153 TEST_HEADER include/spdk/hexlify.h 00:03:41.153 TEST_HEADER include/spdk/histogram_data.h 00:03:41.153 TEST_HEADER include/spdk/idxd.h 00:03:41.153 TEST_HEADER include/spdk/idxd_spec.h 00:03:41.153 TEST_HEADER include/spdk/init.h 00:03:41.153 TEST_HEADER include/spdk/ioat.h 00:03:41.153 TEST_HEADER include/spdk/ioat_spec.h 00:03:41.153 TEST_HEADER include/spdk/iscsi_spec.h 00:03:41.153 TEST_HEADER include/spdk/json.h 00:03:41.153 TEST_HEADER include/spdk/jsonrpc.h 00:03:41.153 TEST_HEADER include/spdk/keyring.h 00:03:41.153 TEST_HEADER include/spdk/keyring_module.h 00:03:41.153 TEST_HEADER include/spdk/likely.h 00:03:41.153 TEST_HEADER include/spdk/log.h 00:03:41.153 TEST_HEADER include/spdk/lvol.h 00:03:41.153 TEST_HEADER include/spdk/md5.h 00:03:41.153 TEST_HEADER include/spdk/memory.h 00:03:41.153 TEST_HEADER include/spdk/mmio.h 00:03:41.153 TEST_HEADER include/spdk/nbd.h 00:03:41.153 TEST_HEADER include/spdk/net.h 00:03:41.153 TEST_HEADER include/spdk/notify.h 00:03:41.153 TEST_HEADER include/spdk/nvme.h 00:03:41.153 TEST_HEADER include/spdk/nvme_intel.h 00:03:41.153 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:41.153 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:41.153 TEST_HEADER include/spdk/nvme_spec.h 00:03:41.153 TEST_HEADER include/spdk/nvme_zns.h 00:03:41.153 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:41.153 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:41.153 TEST_HEADER include/spdk/nvmf.h 00:03:41.153 TEST_HEADER 
include/spdk/nvmf_spec.h 00:03:41.153 TEST_HEADER include/spdk/nvmf_transport.h 00:03:41.153 TEST_HEADER include/spdk/opal.h 00:03:41.153 TEST_HEADER include/spdk/opal_spec.h 00:03:41.153 TEST_HEADER include/spdk/pci_ids.h 00:03:41.153 TEST_HEADER include/spdk/pipe.h 00:03:41.153 TEST_HEADER include/spdk/queue.h 00:03:41.153 TEST_HEADER include/spdk/reduce.h 00:03:41.153 TEST_HEADER include/spdk/rpc.h 00:03:41.153 TEST_HEADER include/spdk/scheduler.h 00:03:41.153 TEST_HEADER include/spdk/scsi.h 00:03:41.153 TEST_HEADER include/spdk/scsi_spec.h 00:03:41.153 TEST_HEADER include/spdk/sock.h 00:03:41.153 TEST_HEADER include/spdk/stdinc.h 00:03:41.153 TEST_HEADER include/spdk/string.h 00:03:41.153 TEST_HEADER include/spdk/thread.h 00:03:41.153 TEST_HEADER include/spdk/trace.h 00:03:41.153 TEST_HEADER include/spdk/trace_parser.h 00:03:41.153 TEST_HEADER include/spdk/tree.h 00:03:41.153 TEST_HEADER include/spdk/ublk.h 00:03:41.153 TEST_HEADER include/spdk/util.h 00:03:41.153 TEST_HEADER include/spdk/uuid.h 00:03:41.153 TEST_HEADER include/spdk/version.h 00:03:41.153 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:41.153 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:41.153 TEST_HEADER include/spdk/vhost.h 00:03:41.153 TEST_HEADER include/spdk/vmd.h 00:03:41.153 TEST_HEADER include/spdk/xor.h 00:03:41.153 TEST_HEADER include/spdk/zipf.h 00:03:41.153 CXX test/cpp_headers/accel.o 00:03:41.153 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.153 LINK vhost 00:03:41.153 LINK spdk_dd 00:03:41.153 CC test/env/vtophys/vtophys.o 00:03:41.153 LINK blobcli 00:03:41.413 CXX test/cpp_headers/accel_module.o 00:03:41.413 LINK vtophys 00:03:41.413 LINK vhost_fuzz 00:03:41.413 CXX test/cpp_headers/assert.o 00:03:41.413 LINK spdk_bdev 00:03:41.413 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.413 CC test/env/memory/memory_ut.o 00:03:41.413 CXX test/cpp_headers/barrier.o 00:03:41.413 LINK spdk_nvme 00:03:41.673 CXX test/cpp_headers/base64.o 00:03:41.673 CC 
test/env/pci/pci_ut.o 00:03:41.673 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:41.673 LINK env_dpdk_post_init 00:03:41.673 CXX test/cpp_headers/bdev.o 00:03:41.673 CC test/rpc_client/rpc_client_test.o 00:03:41.674 LINK mem_callbacks 00:03:41.674 CC test/event/event_perf/event_perf.o 00:03:41.674 CC test/nvme/aer/aer.o 00:03:41.934 CXX test/cpp_headers/bdev_module.o 00:03:41.934 LINK hello_fsdev 00:03:41.934 LINK event_perf 00:03:41.934 LINK rpc_client_test 00:03:41.934 CXX test/cpp_headers/bdev_zone.o 00:03:41.934 LINK pci_ut 00:03:42.194 LINK aer 00:03:42.194 CC test/accel/dif/dif.o 00:03:42.194 CC test/event/reactor/reactor.o 00:03:42.194 CC test/blobfs/mkfs/mkfs.o 00:03:42.194 CXX test/cpp_headers/bit_array.o 00:03:42.194 LINK reactor 00:03:42.194 CC examples/bdev/hello_world/hello_bdev.o 00:03:42.194 CXX test/cpp_headers/bit_pool.o 00:03:42.194 CC test/event/reactor_perf/reactor_perf.o 00:03:42.454 CC test/nvme/reset/reset.o 00:03:42.454 LINK mkfs 00:03:42.454 CC test/lvol/esnap/esnap.o 00:03:42.454 CXX test/cpp_headers/blob_bdev.o 00:03:42.454 LINK iscsi_fuzz 00:03:42.454 CC test/event/app_repeat/app_repeat.o 00:03:42.454 LINK hello_bdev 00:03:42.454 LINK reactor_perf 00:03:42.713 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.713 CXX test/cpp_headers/blobfs.o 00:03:42.713 LINK reset 00:03:42.713 LINK app_repeat 00:03:42.713 CC examples/nvme/hello_world/hello_world.o 00:03:42.713 CC test/nvme/sgl/sgl.o 00:03:42.713 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.713 CXX test/cpp_headers/blob.o 00:03:42.713 LINK memory_ut 00:03:42.713 CC test/nvme/e2edp/nvme_dp.o 00:03:42.713 LINK dif 00:03:42.973 CXX test/cpp_headers/conf.o 00:03:42.973 CC test/nvme/overhead/overhead.o 00:03:42.973 CC test/event/scheduler/scheduler.o 00:03:42.973 LINK hello_world 00:03:42.973 CXX test/cpp_headers/config.o 00:03:42.973 LINK sgl 00:03:42.973 CXX test/cpp_headers/cpuset.o 00:03:42.973 CC test/nvme/err_injection/err_injection.o 00:03:43.232 LINK nvme_dp 00:03:43.232 CC 
test/nvme/startup/startup.o 00:03:43.232 LINK scheduler 00:03:43.232 CC examples/nvme/reconnect/reconnect.o 00:03:43.232 LINK overhead 00:03:43.232 CXX test/cpp_headers/crc16.o 00:03:43.232 LINK err_injection 00:03:43.232 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:43.232 CC test/nvme/reserve/reserve.o 00:03:43.232 LINK startup 00:03:43.490 CXX test/cpp_headers/crc32.o 00:03:43.490 CC examples/nvme/hotplug/hotplug.o 00:03:43.490 CC examples/nvme/arbitration/arbitration.o 00:03:43.490 LINK reconnect 00:03:43.490 CXX test/cpp_headers/crc64.o 00:03:43.490 CC test/nvme/simple_copy/simple_copy.o 00:03:43.490 LINK reserve 00:03:43.490 LINK bdevperf 00:03:43.750 CXX test/cpp_headers/dif.o 00:03:43.750 CXX test/cpp_headers/dma.o 00:03:43.750 LINK hotplug 00:03:43.750 CC test/bdev/bdevio/bdevio.o 00:03:43.750 LINK nvme_manage 00:03:43.750 CC test/nvme/connect_stress/connect_stress.o 00:03:43.750 LINK simple_copy 00:03:43.750 CXX test/cpp_headers/endian.o 00:03:43.750 LINK arbitration 00:03:43.750 CC test/nvme/boot_partition/boot_partition.o 00:03:44.010 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:44.010 CC examples/nvme/abort/abort.o 00:03:44.010 CXX test/cpp_headers/env_dpdk.o 00:03:44.010 LINK connect_stress 00:03:44.010 CXX test/cpp_headers/env.o 00:03:44.010 LINK boot_partition 00:03:44.010 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:44.010 CC test/nvme/compliance/nvme_compliance.o 00:03:44.010 LINK cmb_copy 00:03:44.010 CXX test/cpp_headers/event.o 00:03:44.010 LINK bdevio 00:03:44.010 CC test/nvme/fused_ordering/fused_ordering.o 00:03:44.270 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.270 LINK pmr_persistence 00:03:44.270 CXX test/cpp_headers/fd_group.o 00:03:44.270 CC test/nvme/fdp/fdp.o 00:03:44.270 LINK abort 00:03:44.270 CC test/nvme/cuse/cuse.o 00:03:44.270 LINK fused_ordering 00:03:44.270 CXX test/cpp_headers/fd.o 00:03:44.270 CXX test/cpp_headers/file.o 00:03:44.270 LINK nvme_compliance 00:03:44.270 LINK doorbell_aers 00:03:44.270 CXX 
test/cpp_headers/fsdev.o 00:03:44.530 CXX test/cpp_headers/fsdev_module.o 00:03:44.530 CXX test/cpp_headers/ftl.o 00:03:44.530 CXX test/cpp_headers/fuse_dispatcher.o 00:03:44.530 CXX test/cpp_headers/gpt_spec.o 00:03:44.530 CXX test/cpp_headers/hexlify.o 00:03:44.530 CXX test/cpp_headers/histogram_data.o 00:03:44.530 LINK fdp 00:03:44.530 CC examples/nvmf/nvmf/nvmf.o 00:03:44.530 CXX test/cpp_headers/idxd.o 00:03:44.530 CXX test/cpp_headers/idxd_spec.o 00:03:44.789 CXX test/cpp_headers/init.o 00:03:44.789 CXX test/cpp_headers/ioat.o 00:03:44.789 CXX test/cpp_headers/ioat_spec.o 00:03:44.789 CXX test/cpp_headers/iscsi_spec.o 00:03:44.789 CXX test/cpp_headers/json.o 00:03:44.789 CXX test/cpp_headers/jsonrpc.o 00:03:44.789 CXX test/cpp_headers/keyring.o 00:03:44.789 CXX test/cpp_headers/keyring_module.o 00:03:44.789 CXX test/cpp_headers/likely.o 00:03:44.789 CXX test/cpp_headers/log.o 00:03:44.789 CXX test/cpp_headers/lvol.o 00:03:44.789 CXX test/cpp_headers/md5.o 00:03:44.789 LINK nvmf 00:03:45.050 CXX test/cpp_headers/memory.o 00:03:45.050 CXX test/cpp_headers/mmio.o 00:03:45.050 CXX test/cpp_headers/nbd.o 00:03:45.050 CXX test/cpp_headers/net.o 00:03:45.050 CXX test/cpp_headers/notify.o 00:03:45.050 CXX test/cpp_headers/nvme.o 00:03:45.050 CXX test/cpp_headers/nvme_intel.o 00:03:45.050 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.050 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:45.050 CXX test/cpp_headers/nvme_spec.o 00:03:45.050 CXX test/cpp_headers/nvme_zns.o 00:03:45.050 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.050 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.050 CXX test/cpp_headers/nvmf.o 00:03:45.050 CXX test/cpp_headers/nvmf_spec.o 00:03:45.310 CXX test/cpp_headers/nvmf_transport.o 00:03:45.310 CXX test/cpp_headers/opal.o 00:03:45.310 CXX test/cpp_headers/opal_spec.o 00:03:45.310 CXX test/cpp_headers/pci_ids.o 00:03:45.310 CXX test/cpp_headers/pipe.o 00:03:45.310 CXX test/cpp_headers/queue.o 00:03:45.310 CXX test/cpp_headers/reduce.o 00:03:45.310 CXX 
test/cpp_headers/rpc.o 00:03:45.310 CXX test/cpp_headers/scheduler.o 00:03:45.310 CXX test/cpp_headers/scsi.o 00:03:45.310 CXX test/cpp_headers/scsi_spec.o 00:03:45.570 CXX test/cpp_headers/sock.o 00:03:45.570 CXX test/cpp_headers/stdinc.o 00:03:45.570 CXX test/cpp_headers/string.o 00:03:45.570 CXX test/cpp_headers/thread.o 00:03:45.570 CXX test/cpp_headers/trace.o 00:03:45.570 CXX test/cpp_headers/trace_parser.o 00:03:45.570 CXX test/cpp_headers/tree.o 00:03:45.570 CXX test/cpp_headers/ublk.o 00:03:45.570 CXX test/cpp_headers/util.o 00:03:45.570 CXX test/cpp_headers/uuid.o 00:03:45.570 CXX test/cpp_headers/version.o 00:03:45.570 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.570 LINK cuse 00:03:45.570 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.570 CXX test/cpp_headers/vhost.o 00:03:45.570 CXX test/cpp_headers/vmd.o 00:03:45.570 CXX test/cpp_headers/xor.o 00:03:45.830 CXX test/cpp_headers/zipf.o 00:03:48.371 LINK esnap 00:03:48.631 00:03:48.631 real 1m24.367s 00:03:48.631 user 7m31.422s 00:03:48.631 sys 1m38.236s 00:03:48.631 03:52:45 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:48.631 ************************************ 00:03:48.631 END TEST make 00:03:48.631 ************************************ 00:03:48.631 03:52:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.631 03:52:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.631 03:52:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.631 03:52:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.631 03:52:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.631 03:52:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.631 03:52:45 -- pm/common@44 -- $ pid=5475 00:03:48.631 03:52:45 -- pm/common@50 -- $ kill -TERM 5475 00:03:48.631 03:52:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.631 03:52:45 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.631 03:52:45 -- pm/common@44 -- $ pid=5477 00:03:48.631 03:52:45 -- pm/common@50 -- $ kill -TERM 5477 00:03:48.631 03:52:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:48.631 03:52:45 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:48.892 03:52:45 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:48.892 03:52:45 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:48.892 03:52:45 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:48.892 03:52:45 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:48.892 03:52:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.892 03:52:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.892 03:52:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.892 03:52:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.892 03:52:45 -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.892 03:52:45 -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.892 03:52:45 -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.892 03:52:45 -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.892 03:52:45 -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.892 03:52:45 -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.892 03:52:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.892 03:52:45 -- scripts/common.sh@344 -- # case "$op" in 00:03:48.892 03:52:45 -- scripts/common.sh@345 -- # : 1 00:03:48.892 03:52:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.892 03:52:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.892 03:52:45 -- scripts/common.sh@365 -- # decimal 1 00:03:48.892 03:52:45 -- scripts/common.sh@353 -- # local d=1 00:03:48.892 03:52:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.892 03:52:45 -- scripts/common.sh@355 -- # echo 1 00:03:48.892 03:52:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.892 03:52:45 -- scripts/common.sh@366 -- # decimal 2 00:03:48.892 03:52:45 -- scripts/common.sh@353 -- # local d=2 00:03:48.892 03:52:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.892 03:52:45 -- scripts/common.sh@355 -- # echo 2 00:03:48.892 03:52:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.892 03:52:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.892 03:52:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.892 03:52:45 -- scripts/common.sh@368 -- # return 0 00:03:48.892 03:52:45 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.892 03:52:45 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:48.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.892 --rc genhtml_branch_coverage=1 00:03:48.892 --rc genhtml_function_coverage=1 00:03:48.892 --rc genhtml_legend=1 00:03:48.892 --rc geninfo_all_blocks=1 00:03:48.892 --rc geninfo_unexecuted_blocks=1 00:03:48.892 00:03:48.892 ' 00:03:48.892 03:52:45 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:48.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.892 --rc genhtml_branch_coverage=1 00:03:48.892 --rc genhtml_function_coverage=1 00:03:48.892 --rc genhtml_legend=1 00:03:48.892 --rc geninfo_all_blocks=1 00:03:48.892 --rc geninfo_unexecuted_blocks=1 00:03:48.892 00:03:48.892 ' 00:03:48.892 03:52:45 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:48.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.892 --rc genhtml_branch_coverage=1 00:03:48.892 --rc 
genhtml_function_coverage=1 00:03:48.892 --rc genhtml_legend=1 00:03:48.892 --rc geninfo_all_blocks=1 00:03:48.892 --rc geninfo_unexecuted_blocks=1 00:03:48.892 00:03:48.892 ' 00:03:48.892 03:52:45 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:48.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.892 --rc genhtml_branch_coverage=1 00:03:48.892 --rc genhtml_function_coverage=1 00:03:48.892 --rc genhtml_legend=1 00:03:48.892 --rc geninfo_all_blocks=1 00:03:48.892 --rc geninfo_unexecuted_blocks=1 00:03:48.892 00:03:48.892 ' 00:03:48.892 03:52:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.892 03:52:45 -- nvmf/common.sh@7 -- # uname -s 00:03:48.892 03:52:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.892 03:52:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.892 03:52:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.892 03:52:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.892 03:52:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.892 03:52:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.892 03:52:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.892 03:52:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.892 03:52:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.892 03:52:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.892 03:52:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71afe1ad-b1cd-47b1-a3e0-2d96376cb6e9 00:03:48.892 03:52:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=71afe1ad-b1cd-47b1-a3e0-2d96376cb6e9 00:03:48.892 03:52:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.892 03:52:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.892 03:52:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.892 03:52:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:48.892 03:52:45 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:48.892 03:52:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:48.892 03:52:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.892 03:52:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.892 03:52:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.892 03:52:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.892 03:52:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.892 03:52:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.892 03:52:45 -- paths/export.sh@5 -- # export PATH 00:03:48.892 03:52:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.892 03:52:45 -- nvmf/common.sh@51 -- # : 0 00:03:48.892 03:52:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:48.893 03:52:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:48.893 03:52:45 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:48.893 03:52:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.893 03:52:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.893 03:52:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:48.893 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:48.893 03:52:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:48.893 03:52:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:48.893 03:52:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:48.893 03:52:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.893 03:52:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:48.893 03:52:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.893 03:52:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.893 03:52:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.893 03:52:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.893 03:52:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.893 03:52:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.893 03:52:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.893 03:52:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.893 03:52:45 -- spdk/autotest.sh@48 -- # udevadm_pid=54445 00:03:48.893 03:52:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.893 03:52:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:48.893 03:52:45 -- pm/common@17 -- # local monitor 00:03:48.893 03:52:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.893 03:52:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.893 03:52:45 -- pm/common@21 -- # date +%s 00:03:48.893 03:52:45 -- pm/common@25 -- # sleep 1 00:03:48.893 03:52:45 -- 
pm/common@21 -- # date +%s 00:03:48.893 03:52:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731901965 00:03:48.893 03:52:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731901965 00:03:49.155 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731901965_collect-vmstat.pm.log 00:03:49.155 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731901965_collect-cpu-load.pm.log 00:03:50.095 03:52:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.095 03:52:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.095 03:52:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.095 03:52:46 -- common/autotest_common.sh@10 -- # set +x 00:03:50.095 03:52:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.095 03:52:46 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:50.095 03:52:46 -- common/autotest_common.sh@10 -- # set +x 00:03:50.095 03:52:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.095 03:52:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.095 03:52:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.095 03:52:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.095 03:52:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.095 03:52:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.095 03:52:46 -- common/autotest_common.sh@1457 -- # uname 00:03:50.095 03:52:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:50.095 03:52:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.095 03:52:46 -- common/autotest_common.sh@1477 -- 
# uname 00:03:50.095 03:52:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:50.095 03:52:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.095 03:52:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.095 lcov: LCOV version 1.15 00:03:50.095 03:52:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:04.988 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.988 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.884 03:53:15 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:19.884 03:53:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.884 03:53:15 -- common/autotest_common.sh@10 -- # set +x 00:04:19.884 03:53:15 -- spdk/autotest.sh@78 -- # rm -f 00:04:19.884 03:53:15 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.884 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:19.884 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:19.884 03:53:16 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.884 03:53:16 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:19.884 03:53:16 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:19.884 03:53:16 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:19.884 
03:53:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.884 03:53:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:19.884 03:53:16 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:19.884 03:53:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.884 03:53:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:19.884 03:53:16 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:19.884 03:53:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.884 03:53:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:19.884 03:53:16 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:19.884 03:53:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:19.884 03:53:16 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:19.884 03:53:16 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:19.884 03:53:16 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:19.884 03:53:16 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.884 03:53:16 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.884 03:53:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.884 03:53:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.884 03:53:16 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:19.884 03:53:16 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.884 03:53:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.884 No valid GPT data, bailing 00:04:19.884 03:53:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.884 03:53:16 -- scripts/common.sh@394 -- # pt= 00:04:19.884 03:53:16 -- scripts/common.sh@395 -- # return 1 00:04:19.884 03:53:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.884 1+0 records in 00:04:19.884 1+0 records out 00:04:19.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651858 s, 161 MB/s 00:04:19.884 03:53:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.884 03:53:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.884 03:53:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:19.884 03:53:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:19.884 03:53:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:19.884 No valid GPT data, bailing 00:04:19.884 03:53:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:20.144 03:53:16 -- scripts/common.sh@394 -- # pt= 00:04:20.144 03:53:16 -- scripts/common.sh@395 -- # return 1 00:04:20.144 03:53:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:20.144 1+0 records in 00:04:20.144 1+0 records out 00:04:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431384 s, 243 MB/s 00:04:20.144 03:53:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.144 03:53:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.144 03:53:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:20.144 03:53:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:20.144 03:53:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:04:20.144 No valid GPT data, bailing 00:04:20.144 03:53:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:20.144 03:53:16 -- scripts/common.sh@394 -- # pt= 00:04:20.144 03:53:16 -- scripts/common.sh@395 -- # return 1 00:04:20.144 03:53:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:20.144 1+0 records in 00:04:20.144 1+0 records out 00:04:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557574 s, 188 MB/s 00:04:20.144 03:53:16 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.144 03:53:16 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.144 03:53:16 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:20.144 03:53:16 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:20.144 03:53:16 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:20.144 No valid GPT data, bailing 00:04:20.144 03:53:16 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:20.144 03:53:16 -- scripts/common.sh@394 -- # pt= 00:04:20.144 03:53:16 -- scripts/common.sh@395 -- # return 1 00:04:20.144 03:53:16 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:20.144 1+0 records in 00:04:20.144 1+0 records out 00:04:20.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660117 s, 159 MB/s 00:04:20.144 03:53:16 -- spdk/autotest.sh@105 -- # sync 00:04:20.144 03:53:16 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.144 03:53:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.144 03:53:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:23.437 03:53:19 -- spdk/autotest.sh@111 -- # uname -s 00:04:23.437 03:53:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:23.437 03:53:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:23.437 03:53:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:23.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.697 Hugepages 00:04:23.697 node hugesize free / total 00:04:23.697 node0 1048576kB 0 / 0 00:04:23.697 node0 2048kB 0 / 0 00:04:23.697 00:04:23.697 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.957 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:23.957 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:24.217 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:24.217 03:53:20 -- spdk/autotest.sh@117 -- # uname -s 00:04:24.217 03:53:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:24.217 03:53:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:24.217 03:53:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.047 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.047 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.047 03:53:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:26.428 03:53:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:26.428 03:53:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:26.428 03:53:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:26.428 03:53:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:26.428 03:53:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:26.428 03:53:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:26.428 03:53:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.428 03:53:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.428 03:53:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:26.428 03:53:22 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:26.428 03:53:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:26.428 03:53:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.687 Waiting for block devices as requested 00:04:26.687 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.947 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.947 03:53:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.947 03:53:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:26.947 03:53:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:26.947 03:53:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.947 03:53:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.947 03:53:23 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:26.947 03:53:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.947 03:53:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.947 03:53:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.947 03:53:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.947 03:53:23 -- common/autotest_common.sh@1543 -- # continue 00:04:26.947 03:53:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.947 03:53:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:26.947 03:53:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:26.947 03:53:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:26.947 03:53:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.947 03:53:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.947 03:53:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.948 03:53:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.948 03:53:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:26.948 03:53:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.948 03:53:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:27.207 03:53:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:27.207 03:53:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:27.207 03:53:23 -- common/autotest_common.sh@1543 -- # continue 00:04:27.207 03:53:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:27.207 03:53:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.207 03:53:23 -- common/autotest_common.sh@10 -- # set +x 00:04:27.207 03:53:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:27.207 03:53:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.207 03:53:23 -- common/autotest_common.sh@10 -- # set +x 00:04:27.207 03:53:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:28.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.148 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.148 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:28.148 03:53:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:28.148 03:53:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.148 03:53:24 -- common/autotest_common.sh@10 -- # set +x 00:04:28.148 03:53:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:28.148 03:53:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:28.148 03:53:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:28.148 03:53:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:28.148 03:53:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:28.148 03:53:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:28.148 03:53:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:28.148 03:53:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:28.148 
03:53:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:28.148 03:53:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:28.148 03:53:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.148 03:53:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.148 03:53:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:28.409 03:53:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:28.409 03:53:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:28.409 03:53:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:28.409 03:53:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:28.409 03:53:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:28.409 03:53:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.409 03:53:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:28.409 03:53:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:28.409 03:53:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:28.409 03:53:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.409 03:53:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:28.409 03:53:24 -- common/autotest_common.sh@1572 -- # return 0 00:04:28.409 03:53:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:28.409 03:53:24 -- common/autotest_common.sh@1580 -- # return 0 00:04:28.409 03:53:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:28.409 03:53:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:28.409 03:53:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.409 03:53:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.409 03:53:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:28.409 03:53:24 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.409 03:53:24 -- common/autotest_common.sh@10 -- # set +x 00:04:28.409 03:53:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:28.409 03:53:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:28.409 03:53:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.409 03:53:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.409 03:53:24 -- common/autotest_common.sh@10 -- # set +x 00:04:28.409 ************************************ 00:04:28.409 START TEST env 00:04:28.409 ************************************ 00:04:28.409 03:53:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:28.409 * Looking for test storage... 00:04:28.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:28.409 03:53:24 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.409 03:53:24 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.409 03:53:24 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.670 03:53:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.670 03:53:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.670 03:53:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.670 03:53:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.670 03:53:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.670 03:53:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.670 03:53:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.670 03:53:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.670 03:53:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.670 03:53:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.670 03:53:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.670 03:53:25 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:28.670 03:53:25 env -- scripts/common.sh@345 -- # : 1 00:04:28.670 03:53:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.670 03:53:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.670 03:53:25 env -- scripts/common.sh@365 -- # decimal 1 00:04:28.670 03:53:25 env -- scripts/common.sh@353 -- # local d=1 00:04:28.670 03:53:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.670 03:53:25 env -- scripts/common.sh@355 -- # echo 1 00:04:28.670 03:53:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.670 03:53:25 env -- scripts/common.sh@366 -- # decimal 2 00:04:28.670 03:53:25 env -- scripts/common.sh@353 -- # local d=2 00:04:28.670 03:53:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.670 03:53:25 env -- scripts/common.sh@355 -- # echo 2 00:04:28.670 03:53:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.670 03:53:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.670 03:53:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.670 03:53:25 env -- scripts/common.sh@368 -- # return 0 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.670 --rc genhtml_branch_coverage=1 00:04:28.670 --rc genhtml_function_coverage=1 00:04:28.670 --rc genhtml_legend=1 00:04:28.670 --rc geninfo_all_blocks=1 00:04:28.670 --rc geninfo_unexecuted_blocks=1 00:04:28.670 00:04:28.670 ' 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.670 --rc genhtml_branch_coverage=1 00:04:28.670 --rc genhtml_function_coverage=1 00:04:28.670 --rc genhtml_legend=1 00:04:28.670 --rc 
geninfo_all_blocks=1 00:04:28.670 --rc geninfo_unexecuted_blocks=1 00:04:28.670 00:04:28.670 ' 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.670 --rc genhtml_branch_coverage=1 00:04:28.670 --rc genhtml_function_coverage=1 00:04:28.670 --rc genhtml_legend=1 00:04:28.670 --rc geninfo_all_blocks=1 00:04:28.670 --rc geninfo_unexecuted_blocks=1 00:04:28.670 00:04:28.670 ' 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.670 --rc genhtml_branch_coverage=1 00:04:28.670 --rc genhtml_function_coverage=1 00:04:28.670 --rc genhtml_legend=1 00:04:28.670 --rc geninfo_all_blocks=1 00:04:28.670 --rc geninfo_unexecuted_blocks=1 00:04:28.670 00:04:28.670 ' 00:04:28.670 03:53:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.670 03:53:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.670 03:53:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.670 ************************************ 00:04:28.670 START TEST env_memory 00:04:28.670 ************************************ 00:04:28.670 03:53:25 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:28.670 00:04:28.670 00:04:28.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.670 http://cunit.sourceforge.net/ 00:04:28.670 00:04:28.670 00:04:28.670 Suite: memory 00:04:28.670 Test: alloc and free memory map ...[2024-11-18 03:53:25.149494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.670 passed 00:04:28.670 Test: mem map translation ...[2024-11-18 03:53:25.189841] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.670 [2024-11-18 03:53:25.189880] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.670 [2024-11-18 03:53:25.189948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.670 [2024-11-18 03:53:25.189964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.670 passed 00:04:28.670 Test: mem map registration ...[2024-11-18 03:53:25.256179] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:28.670 [2024-11-18 03:53:25.256221] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:28.670 passed 00:04:28.930 Test: mem map adjacent registrations ...passed 00:04:28.930 00:04:28.930 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.930 suites 1 1 n/a 0 0 00:04:28.930 tests 4 4 4 0 0 00:04:28.930 asserts 152 152 152 0 n/a 00:04:28.930 00:04:28.930 Elapsed time = 0.230 seconds 00:04:28.930 00:04:28.930 real 0m0.282s 00:04:28.930 user 0m0.240s 00:04:28.930 sys 0m0.030s 00:04:28.930 03:53:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.930 03:53:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.930 ************************************ 00:04:28.930 END TEST env_memory 00:04:28.930 ************************************ 00:04:28.930 03:53:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.930 
03:53:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.930 03:53:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.930 03:53:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.930 ************************************ 00:04:28.930 START TEST env_vtophys 00:04:28.930 ************************************ 00:04:28.930 03:53:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.930 EAL: lib.eal log level changed from notice to debug 00:04:28.930 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 1 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 2 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 3 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 4 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 5 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 6 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 7 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 8 as core 0 on socket 0 00:04:28.930 EAL: Detected lcore 9 as core 0 on socket 0 00:04:28.930 EAL: Maximum logical cores by configuration: 128 00:04:28.930 EAL: Detected CPU lcores: 10 00:04:28.930 EAL: Detected NUMA nodes: 1 00:04:28.930 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.930 EAL: Detected shared linkage of DPDK 00:04:28.930 EAL: No shared files mode enabled, IPC will be disabled 00:04:28.930 EAL: Selected IOVA mode 'PA' 00:04:28.930 EAL: Probing VFIO support... 00:04:28.930 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.930 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:28.930 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.930 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.930 EAL: Setting up physically contiguous memory... 
00:04:28.930 EAL: Setting maximum number of open files to 524288 00:04:28.930 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.930 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.930 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.930 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.930 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.930 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.930 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.930 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.930 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.930 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.930 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.930 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.930 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.930 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.930 EAL: Hugepages will be freed exactly as allocated. 
00:04:28.930 EAL: No shared files mode enabled, IPC is disabled 00:04:28.930 EAL: No shared files mode enabled, IPC is disabled 00:04:29.189 EAL: TSC frequency is ~2290000 KHz 00:04:29.189 EAL: Main lcore 0 is ready (tid=7f506df1ca40;cpuset=[0]) 00:04:29.189 EAL: Trying to obtain current memory policy. 00:04:29.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.189 EAL: Restoring previous memory policy: 0 00:04:29.189 EAL: request: mp_malloc_sync 00:04:29.189 EAL: No shared files mode enabled, IPC is disabled 00:04:29.189 EAL: Heap on socket 0 was expanded by 2MB 00:04:29.189 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:29.189 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:29.189 EAL: Mem event callback 'spdk:(nil)' registered 00:04:29.189 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:29.190 00:04:29.190 00:04:29.190 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.190 http://cunit.sourceforge.net/ 00:04:29.190 00:04:29.190 00:04:29.190 Suite: components_suite 00:04:29.759 Test: vtophys_malloc_test ...passed 00:04:29.759 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:29.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.759 EAL: Restoring previous memory policy: 4 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was expanded by 4MB 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was shrunk by 4MB 00:04:29.759 EAL: Trying to obtain current memory policy. 
00:04:29.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.759 EAL: Restoring previous memory policy: 4 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was expanded by 6MB 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was shrunk by 6MB 00:04:29.759 EAL: Trying to obtain current memory policy. 00:04:29.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.759 EAL: Restoring previous memory policy: 4 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was expanded by 10MB 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was shrunk by 10MB 00:04:29.759 EAL: Trying to obtain current memory policy. 00:04:29.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.759 EAL: Restoring previous memory policy: 4 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was expanded by 18MB 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was shrunk by 18MB 00:04:29.759 EAL: Trying to obtain current memory policy. 
00:04:29.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.759 EAL: Restoring previous memory policy: 4 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was expanded by 34MB 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was shrunk by 34MB 00:04:29.759 EAL: Trying to obtain current memory policy. 00:04:29.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.759 EAL: Restoring previous memory policy: 4 00:04:29.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.759 EAL: request: mp_malloc_sync 00:04:29.759 EAL: No shared files mode enabled, IPC is disabled 00:04:29.759 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.018 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.018 EAL: request: mp_malloc_sync 00:04:30.019 EAL: No shared files mode enabled, IPC is disabled 00:04:30.019 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.019 EAL: Trying to obtain current memory policy. 00:04:30.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.019 EAL: Restoring previous memory policy: 4 00:04:30.019 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.019 EAL: request: mp_malloc_sync 00:04:30.019 EAL: No shared files mode enabled, IPC is disabled 00:04:30.019 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.278 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.538 EAL: request: mp_malloc_sync 00:04:30.538 EAL: No shared files mode enabled, IPC is disabled 00:04:30.538 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.538 EAL: Trying to obtain current memory policy. 
00:04:30.538 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.827 EAL: Restoring previous memory policy: 4 00:04:30.827 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.827 EAL: request: mp_malloc_sync 00:04:30.827 EAL: No shared files mode enabled, IPC is disabled 00:04:30.827 EAL: Heap on socket 0 was expanded by 258MB 00:04:31.101 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.362 EAL: request: mp_malloc_sync 00:04:31.362 EAL: No shared files mode enabled, IPC is disabled 00:04:31.362 EAL: Heap on socket 0 was shrunk by 258MB 00:04:31.622 EAL: Trying to obtain current memory policy. 00:04:31.622 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.882 EAL: Restoring previous memory policy: 4 00:04:31.882 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.882 EAL: request: mp_malloc_sync 00:04:31.882 EAL: No shared files mode enabled, IPC is disabled 00:04:31.882 EAL: Heap on socket 0 was expanded by 514MB 00:04:32.822 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.080 EAL: request: mp_malloc_sync 00:04:33.080 EAL: No shared files mode enabled, IPC is disabled 00:04:33.080 EAL: Heap on socket 0 was shrunk by 514MB 00:04:33.649 EAL: Trying to obtain current memory policy. 
00:04:33.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.218 EAL: Restoring previous memory policy: 4 00:04:34.218 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.218 EAL: request: mp_malloc_sync 00:04:34.218 EAL: No shared files mode enabled, IPC is disabled 00:04:34.218 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.128 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.388 EAL: request: mp_malloc_sync 00:04:36.388 EAL: No shared files mode enabled, IPC is disabled 00:04:36.388 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:38.296 passed 00:04:38.296 00:04:38.296 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.296 suites 1 1 n/a 0 0 00:04:38.296 tests 2 2 2 0 0 00:04:38.296 asserts 5537 5537 5537 0 n/a 00:04:38.296 00:04:38.296 Elapsed time = 8.883 seconds 00:04:38.296 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.296 EAL: request: mp_malloc_sync 00:04:38.296 EAL: No shared files mode enabled, IPC is disabled 00:04:38.296 EAL: Heap on socket 0 was shrunk by 2MB 00:04:38.296 EAL: No shared files mode enabled, IPC is disabled 00:04:38.296 EAL: No shared files mode enabled, IPC is disabled 00:04:38.296 EAL: No shared files mode enabled, IPC is disabled 00:04:38.296 00:04:38.296 real 0m9.210s 00:04:38.296 user 0m7.832s 00:04:38.296 sys 0m1.223s 00:04:38.296 03:53:34 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.296 03:53:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:38.296 ************************************ 00:04:38.296 END TEST env_vtophys 00:04:38.296 ************************************ 00:04:38.296 03:53:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.297 03:53:34 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.297 03:53:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.297 03:53:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.297 
************************************ 00:04:38.297 START TEST env_pci 00:04:38.297 ************************************ 00:04:38.297 03:53:34 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.297 00:04:38.297 00:04:38.297 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.297 http://cunit.sourceforge.net/ 00:04:38.297 00:04:38.297 00:04:38.297 Suite: pci 00:04:38.297 Test: pci_hook ...[2024-11-18 03:53:34.747438] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56752 has claimed it 00:04:38.297 passed 00:04:38.297 00:04:38.297 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.297 suites 1 1 n/a 0 0 00:04:38.297 tests 1 1 1 0 0 00:04:38.297 asserts 25 25 25 0 n/a 00:04:38.297 00:04:38.297 Elapsed time = 0.005 seconds 00:04:38.297 EAL: Cannot find device (10000:00:01.0) 00:04:38.297 EAL: Failed to attach device on primary process 00:04:38.297 00:04:38.297 real 0m0.101s 00:04:38.297 user 0m0.045s 00:04:38.297 sys 0m0.055s 00:04:38.297 03:53:34 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.297 03:53:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:38.297 ************************************ 00:04:38.297 END TEST env_pci 00:04:38.297 ************************************ 00:04:38.297 03:53:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.297 03:53:34 env -- env/env.sh@15 -- # uname 00:04:38.297 03:53:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.297 03:53:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.297 03:53:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.297 03:53:34 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:38.297 03:53:34 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.297 03:53:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.297 ************************************ 00:04:38.297 START TEST env_dpdk_post_init 00:04:38.297 ************************************ 00:04:38.297 03:53:34 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.557 EAL: Detected CPU lcores: 10 00:04:38.557 EAL: Detected NUMA nodes: 1 00:04:38.557 EAL: Detected shared linkage of DPDK 00:04:38.557 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.557 EAL: Selected IOVA mode 'PA' 00:04:38.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.557 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:38.557 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:38.557 Starting DPDK initialization... 00:04:38.557 Starting SPDK post initialization... 00:04:38.557 SPDK NVMe probe 00:04:38.557 Attaching to 0000:00:10.0 00:04:38.557 Attaching to 0000:00:11.0 00:04:38.557 Attached to 0000:00:10.0 00:04:38.557 Attached to 0000:00:11.0 00:04:38.557 Cleaning up... 
00:04:38.557 00:04:38.557 real 0m0.281s 00:04:38.557 user 0m0.087s 00:04:38.557 sys 0m0.095s 00:04:38.557 03:53:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.557 03:53:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.557 ************************************ 00:04:38.557 END TEST env_dpdk_post_init 00:04:38.557 ************************************ 00:04:38.817 03:53:35 env -- env/env.sh@26 -- # uname 00:04:38.817 03:53:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.817 03:53:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.817 03:53:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.817 03:53:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.817 03:53:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.817 ************************************ 00:04:38.817 START TEST env_mem_callbacks 00:04:38.817 ************************************ 00:04:38.817 03:53:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.817 EAL: Detected CPU lcores: 10 00:04:38.817 EAL: Detected NUMA nodes: 1 00:04:38.817 EAL: Detected shared linkage of DPDK 00:04:38.817 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.817 EAL: Selected IOVA mode 'PA' 00:04:38.817 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.817 00:04:38.817 00:04:38.817 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.817 http://cunit.sourceforge.net/ 00:04:38.817 00:04:38.817 00:04:38.817 Suite: memory 00:04:38.817 Test: test ... 
00:04:38.817 register 0x200000200000 2097152 00:04:38.817 malloc 3145728 00:04:38.817 register 0x200000400000 4194304 00:04:38.817 buf 0x2000004fffc0 len 3145728 PASSED 00:04:38.817 malloc 64 00:04:38.817 buf 0x2000004ffec0 len 64 PASSED 00:04:38.817 malloc 4194304 00:04:38.817 register 0x200000800000 6291456 00:04:38.817 buf 0x2000009fffc0 len 4194304 PASSED 00:04:38.817 free 0x2000004fffc0 3145728 00:04:38.817 free 0x2000004ffec0 64 00:04:38.817 unregister 0x200000400000 4194304 PASSED 00:04:38.817 free 0x2000009fffc0 4194304 00:04:38.817 unregister 0x200000800000 6291456 PASSED 00:04:39.077 malloc 8388608 00:04:39.077 register 0x200000400000 10485760 00:04:39.077 buf 0x2000005fffc0 len 8388608 PASSED 00:04:39.077 free 0x2000005fffc0 8388608 00:04:39.077 unregister 0x200000400000 10485760 PASSED 00:04:39.077 passed 00:04:39.077 00:04:39.077 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.077 suites 1 1 n/a 0 0 00:04:39.077 tests 1 1 1 0 0 00:04:39.077 asserts 15 15 15 0 n/a 00:04:39.077 00:04:39.077 Elapsed time = 0.082 seconds 00:04:39.077 00:04:39.077 real 0m0.277s 00:04:39.077 user 0m0.107s 00:04:39.077 sys 0m0.068s 00:04:39.077 03:53:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.077 03:53:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.077 ************************************ 00:04:39.077 END TEST env_mem_callbacks 00:04:39.077 ************************************ 00:04:39.077 00:04:39.077 real 0m10.714s 00:04:39.077 user 0m8.542s 00:04:39.077 sys 0m1.824s 00:04:39.077 03:53:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.077 03:53:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.077 ************************************ 00:04:39.077 END TEST env 00:04:39.077 ************************************ 00:04:39.077 03:53:35 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.077 03:53:35 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.077 03:53:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.077 03:53:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.077 ************************************ 00:04:39.077 START TEST rpc 00:04:39.077 ************************************ 00:04:39.077 03:53:35 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.337 * Looking for test storage... 00:04:39.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.337 03:53:35 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.337 03:53:35 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.337 03:53:35 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.337 03:53:35 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.337 03:53:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.337 03:53:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.337 03:53:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.337 03:53:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.337 03:53:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.337 03:53:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.337 03:53:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.337 03:53:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.337 03:53:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.337 03:53:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.337 03:53:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.337 03:53:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.337 03:53:35 rpc -- scripts/common.sh@345 -- # : 1 00:04:39.337 03:53:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.338 03:53:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.338 03:53:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.338 03:53:35 rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.338 03:53:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.338 03:53:35 rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.338 03:53:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.338 03:53:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.338 03:53:35 rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.338 03:53:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.338 03:53:35 rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.338 03:53:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.338 03:53:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.338 03:53:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.338 03:53:35 rpc -- scripts/common.sh@368 -- # return 0 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.338 --rc genhtml_branch_coverage=1 00:04:39.338 --rc genhtml_function_coverage=1 00:04:39.338 --rc genhtml_legend=1 00:04:39.338 --rc geninfo_all_blocks=1 00:04:39.338 --rc geninfo_unexecuted_blocks=1 00:04:39.338 00:04:39.338 ' 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.338 --rc genhtml_branch_coverage=1 00:04:39.338 --rc genhtml_function_coverage=1 00:04:39.338 --rc genhtml_legend=1 00:04:39.338 --rc geninfo_all_blocks=1 00:04:39.338 --rc geninfo_unexecuted_blocks=1 00:04:39.338 00:04:39.338 ' 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:39.338 --rc genhtml_branch_coverage=1 00:04:39.338 --rc genhtml_function_coverage=1 00:04:39.338 --rc genhtml_legend=1 00:04:39.338 --rc geninfo_all_blocks=1 00:04:39.338 --rc geninfo_unexecuted_blocks=1 00:04:39.338 00:04:39.338 ' 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.338 --rc genhtml_branch_coverage=1 00:04:39.338 --rc genhtml_function_coverage=1 00:04:39.338 --rc genhtml_legend=1 00:04:39.338 --rc geninfo_all_blocks=1 00:04:39.338 --rc geninfo_unexecuted_blocks=1 00:04:39.338 00:04:39.338 ' 00:04:39.338 03:53:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56879 00:04:39.338 03:53:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:39.338 03:53:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.338 03:53:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56879 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@835 -- # '[' -z 56879 ']' 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.338 03:53:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.338 [2024-11-18 03:53:35.961388] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:39.338 [2024-11-18 03:53:35.961506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56879 ] 00:04:39.598 [2024-11-18 03:53:36.138600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.858 [2024-11-18 03:53:36.271400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.858 [2024-11-18 03:53:36.271464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56879' to capture a snapshot of events at runtime. 00:04:39.858 [2024-11-18 03:53:36.271475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.858 [2024-11-18 03:53:36.271486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.858 [2024-11-18 03:53:36.271494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56879 for offline analysis/debug. 
00:04:39.858 [2024-11-18 03:53:36.272934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.798 03:53:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.798 03:53:37 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.798 03:53:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.798 03:53:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.798 03:53:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.798 03:53:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.798 03:53:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.798 03:53:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.798 03:53:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.798 ************************************ 00:04:40.798 START TEST rpc_integrity 00:04:40.798 ************************************ 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.798 03:53:37 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.798 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.798 { 00:04:40.798 "name": "Malloc0", 00:04:40.798 "aliases": [ 00:04:40.798 "16285d5c-44fc-4181-9420-3e5cd13b5f80" 00:04:40.798 ], 00:04:40.798 "product_name": "Malloc disk", 00:04:40.798 "block_size": 512, 00:04:40.798 "num_blocks": 16384, 00:04:40.798 "uuid": "16285d5c-44fc-4181-9420-3e5cd13b5f80", 00:04:40.798 "assigned_rate_limits": { 00:04:40.798 "rw_ios_per_sec": 0, 00:04:40.798 "rw_mbytes_per_sec": 0, 00:04:40.798 "r_mbytes_per_sec": 0, 00:04:40.798 "w_mbytes_per_sec": 0 00:04:40.798 }, 00:04:40.798 "claimed": false, 00:04:40.798 "zoned": false, 00:04:40.798 "supported_io_types": { 00:04:40.798 "read": true, 00:04:40.798 "write": true, 00:04:40.798 "unmap": true, 00:04:40.798 "flush": true, 00:04:40.798 "reset": true, 00:04:40.798 "nvme_admin": false, 00:04:40.798 "nvme_io": false, 00:04:40.798 "nvme_io_md": false, 00:04:40.798 "write_zeroes": true, 00:04:40.798 "zcopy": true, 00:04:40.798 "get_zone_info": false, 00:04:40.798 "zone_management": false, 00:04:40.798 "zone_append": false, 00:04:40.798 "compare": false, 00:04:40.798 "compare_and_write": false, 00:04:40.798 "abort": true, 00:04:40.798 "seek_hole": false, 
00:04:40.798 "seek_data": false, 00:04:40.798 "copy": true, 00:04:40.798 "nvme_iov_md": false 00:04:40.798 }, 00:04:40.798 "memory_domains": [ 00:04:40.798 { 00:04:40.798 "dma_device_id": "system", 00:04:40.798 "dma_device_type": 1 00:04:40.798 }, 00:04:40.798 { 00:04:40.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.798 "dma_device_type": 2 00:04:40.798 } 00:04:40.798 ], 00:04:40.798 "driver_specific": {} 00:04:40.798 } 00:04:40.798 ]' 00:04:40.798 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.057 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.057 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:41.057 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.057 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.057 [2024-11-18 03:53:37.484892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:41.057 [2024-11-18 03:53:37.484973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.057 [2024-11-18 03:53:37.485007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:41.057 [2024-11-18 03:53:37.485023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.057 [2024-11-18 03:53:37.487729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.057 [2024-11-18 03:53:37.487785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.057 Passthru0 00:04:41.057 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.057 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.057 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.057 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:41.057 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.057 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.057 { 00:04:41.057 "name": "Malloc0", 00:04:41.057 "aliases": [ 00:04:41.057 "16285d5c-44fc-4181-9420-3e5cd13b5f80" 00:04:41.057 ], 00:04:41.057 "product_name": "Malloc disk", 00:04:41.057 "block_size": 512, 00:04:41.057 "num_blocks": 16384, 00:04:41.057 "uuid": "16285d5c-44fc-4181-9420-3e5cd13b5f80", 00:04:41.057 "assigned_rate_limits": { 00:04:41.057 "rw_ios_per_sec": 0, 00:04:41.057 "rw_mbytes_per_sec": 0, 00:04:41.057 "r_mbytes_per_sec": 0, 00:04:41.057 "w_mbytes_per_sec": 0 00:04:41.057 }, 00:04:41.057 "claimed": true, 00:04:41.057 "claim_type": "exclusive_write", 00:04:41.057 "zoned": false, 00:04:41.057 "supported_io_types": { 00:04:41.057 "read": true, 00:04:41.057 "write": true, 00:04:41.057 "unmap": true, 00:04:41.057 "flush": true, 00:04:41.057 "reset": true, 00:04:41.057 "nvme_admin": false, 00:04:41.057 "nvme_io": false, 00:04:41.057 "nvme_io_md": false, 00:04:41.057 "write_zeroes": true, 00:04:41.057 "zcopy": true, 00:04:41.057 "get_zone_info": false, 00:04:41.057 "zone_management": false, 00:04:41.057 "zone_append": false, 00:04:41.057 "compare": false, 00:04:41.057 "compare_and_write": false, 00:04:41.057 "abort": true, 00:04:41.057 "seek_hole": false, 00:04:41.057 "seek_data": false, 00:04:41.057 "copy": true, 00:04:41.057 "nvme_iov_md": false 00:04:41.057 }, 00:04:41.057 "memory_domains": [ 00:04:41.057 { 00:04:41.057 "dma_device_id": "system", 00:04:41.057 "dma_device_type": 1 00:04:41.057 }, 00:04:41.057 { 00:04:41.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.057 "dma_device_type": 2 00:04:41.057 } 00:04:41.057 ], 00:04:41.057 "driver_specific": {} 00:04:41.057 }, 00:04:41.057 { 00:04:41.057 "name": "Passthru0", 00:04:41.057 "aliases": [ 00:04:41.057 "015c8f52-180f-5fd5-a3a1-811a550160d2" 00:04:41.057 ], 00:04:41.057 "product_name": "passthru", 00:04:41.057 
"block_size": 512, 00:04:41.057 "num_blocks": 16384, 00:04:41.057 "uuid": "015c8f52-180f-5fd5-a3a1-811a550160d2", 00:04:41.057 "assigned_rate_limits": { 00:04:41.057 "rw_ios_per_sec": 0, 00:04:41.057 "rw_mbytes_per_sec": 0, 00:04:41.058 "r_mbytes_per_sec": 0, 00:04:41.058 "w_mbytes_per_sec": 0 00:04:41.058 }, 00:04:41.058 "claimed": false, 00:04:41.058 "zoned": false, 00:04:41.058 "supported_io_types": { 00:04:41.058 "read": true, 00:04:41.058 "write": true, 00:04:41.058 "unmap": true, 00:04:41.058 "flush": true, 00:04:41.058 "reset": true, 00:04:41.058 "nvme_admin": false, 00:04:41.058 "nvme_io": false, 00:04:41.058 "nvme_io_md": false, 00:04:41.058 "write_zeroes": true, 00:04:41.058 "zcopy": true, 00:04:41.058 "get_zone_info": false, 00:04:41.058 "zone_management": false, 00:04:41.058 "zone_append": false, 00:04:41.058 "compare": false, 00:04:41.058 "compare_and_write": false, 00:04:41.058 "abort": true, 00:04:41.058 "seek_hole": false, 00:04:41.058 "seek_data": false, 00:04:41.058 "copy": true, 00:04:41.058 "nvme_iov_md": false 00:04:41.058 }, 00:04:41.058 "memory_domains": [ 00:04:41.058 { 00:04:41.058 "dma_device_id": "system", 00:04:41.058 "dma_device_type": 1 00:04:41.058 }, 00:04:41.058 { 00:04:41.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.058 "dma_device_type": 2 00:04:41.058 } 00:04:41.058 ], 00:04:41.058 "driver_specific": { 00:04:41.058 "passthru": { 00:04:41.058 "name": "Passthru0", 00:04:41.058 "base_bdev_name": "Malloc0" 00:04:41.058 } 00:04:41.058 } 00:04:41.058 } 00:04:41.058 ]' 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.058 03:53:37 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.058 03:53:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.058 00:04:41.058 real 0m0.357s 00:04:41.058 user 0m0.189s 00:04:41.058 sys 0m0.056s 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.058 03:53:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.058 ************************************ 00:04:41.058 END TEST rpc_integrity 00:04:41.058 ************************************ 00:04:41.317 03:53:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:41.317 03:53:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.317 03:53:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.317 03:53:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.317 ************************************ 00:04:41.317 START TEST rpc_plugins 00:04:41.317 ************************************ 00:04:41.317 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:41.318 { 00:04:41.318 "name": "Malloc1", 00:04:41.318 "aliases": [ 00:04:41.318 "f4e7b462-1a37-44c5-9a2c-5e014906b0e3" 00:04:41.318 ], 00:04:41.318 "product_name": "Malloc disk", 00:04:41.318 "block_size": 4096, 00:04:41.318 "num_blocks": 256, 00:04:41.318 "uuid": "f4e7b462-1a37-44c5-9a2c-5e014906b0e3", 00:04:41.318 "assigned_rate_limits": { 00:04:41.318 "rw_ios_per_sec": 0, 00:04:41.318 "rw_mbytes_per_sec": 0, 00:04:41.318 "r_mbytes_per_sec": 0, 00:04:41.318 "w_mbytes_per_sec": 0 00:04:41.318 }, 00:04:41.318 "claimed": false, 00:04:41.318 "zoned": false, 00:04:41.318 "supported_io_types": { 00:04:41.318 "read": true, 00:04:41.318 "write": true, 00:04:41.318 "unmap": true, 00:04:41.318 "flush": true, 00:04:41.318 "reset": true, 00:04:41.318 "nvme_admin": false, 00:04:41.318 "nvme_io": false, 00:04:41.318 "nvme_io_md": false, 00:04:41.318 "write_zeroes": true, 00:04:41.318 "zcopy": true, 00:04:41.318 "get_zone_info": false, 00:04:41.318 "zone_management": false, 00:04:41.318 "zone_append": false, 00:04:41.318 "compare": false, 00:04:41.318 "compare_and_write": false, 00:04:41.318 "abort": true, 00:04:41.318 "seek_hole": false, 00:04:41.318 "seek_data": false, 00:04:41.318 "copy": 
true, 00:04:41.318 "nvme_iov_md": false 00:04:41.318 }, 00:04:41.318 "memory_domains": [ 00:04:41.318 { 00:04:41.318 "dma_device_id": "system", 00:04:41.318 "dma_device_type": 1 00:04:41.318 }, 00:04:41.318 { 00:04:41.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.318 "dma_device_type": 2 00:04:41.318 } 00:04:41.318 ], 00:04:41.318 "driver_specific": {} 00:04:41.318 } 00:04:41.318 ]' 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:41.318 03:53:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:41.318 00:04:41.318 real 0m0.175s 00:04:41.318 user 0m0.094s 00:04:41.318 sys 0m0.030s 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.318 03:53:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.318 ************************************ 00:04:41.318 END TEST rpc_plugins 00:04:41.318 ************************************ 00:04:41.578 03:53:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:41.578 03:53:37 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.578 03:53:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.578 03:53:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.578 ************************************ 00:04:41.578 START TEST rpc_trace_cmd_test 00:04:41.578 ************************************ 00:04:41.578 03:53:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:41.578 03:53:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:41.578 03:53:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.578 03:53:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.578 03:53:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.578 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56879", 00:04:41.578 "tpoint_group_mask": "0x8", 00:04:41.578 "iscsi_conn": { 00:04:41.578 "mask": "0x2", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "scsi": { 00:04:41.578 "mask": "0x4", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "bdev": { 00:04:41.578 "mask": "0x8", 00:04:41.578 "tpoint_mask": "0xffffffffffffffff" 00:04:41.578 }, 00:04:41.578 "nvmf_rdma": { 00:04:41.578 "mask": "0x10", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "nvmf_tcp": { 00:04:41.578 "mask": "0x20", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "ftl": { 00:04:41.578 "mask": "0x40", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "blobfs": { 00:04:41.578 "mask": "0x80", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "dsa": { 00:04:41.578 "mask": "0x200", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "thread": { 00:04:41.578 "mask": "0x400", 00:04:41.578 
"tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "nvme_pcie": { 00:04:41.578 "mask": "0x800", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "iaa": { 00:04:41.578 "mask": "0x1000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "nvme_tcp": { 00:04:41.578 "mask": "0x2000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "bdev_nvme": { 00:04:41.578 "mask": "0x4000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "sock": { 00:04:41.578 "mask": "0x8000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "blob": { 00:04:41.578 "mask": "0x10000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "bdev_raid": { 00:04:41.578 "mask": "0x20000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 }, 00:04:41.578 "scheduler": { 00:04:41.578 "mask": "0x40000", 00:04:41.578 "tpoint_mask": "0x0" 00:04:41.578 } 00:04:41.578 }' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.578 00:04:41.578 real 0m0.236s 00:04:41.578 user 0m0.190s 00:04:41.578 sys 0m0.034s 00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:41.578 03:53:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.578 ************************************ 00:04:41.578 END TEST rpc_trace_cmd_test 00:04:41.578 ************************************ 00:04:41.838 03:53:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.838 03:53:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.838 03:53:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.838 03:53:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.838 03:53:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.838 03:53:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 ************************************ 00:04:41.838 START TEST rpc_daemon_integrity 00:04:41.838 ************************************ 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.838 { 00:04:41.838 "name": "Malloc2", 00:04:41.838 "aliases": [ 00:04:41.838 "599a5349-ca09-44eb-accd-483ec1a9d7be" 00:04:41.838 ], 00:04:41.838 "product_name": "Malloc disk", 00:04:41.838 "block_size": 512, 00:04:41.838 "num_blocks": 16384, 00:04:41.838 "uuid": "599a5349-ca09-44eb-accd-483ec1a9d7be", 00:04:41.838 "assigned_rate_limits": { 00:04:41.838 "rw_ios_per_sec": 0, 00:04:41.838 "rw_mbytes_per_sec": 0, 00:04:41.838 "r_mbytes_per_sec": 0, 00:04:41.838 "w_mbytes_per_sec": 0 00:04:41.838 }, 00:04:41.838 "claimed": false, 00:04:41.838 "zoned": false, 00:04:41.838 "supported_io_types": { 00:04:41.838 "read": true, 00:04:41.838 "write": true, 00:04:41.838 "unmap": true, 00:04:41.838 "flush": true, 00:04:41.838 "reset": true, 00:04:41.838 "nvme_admin": false, 00:04:41.838 "nvme_io": false, 00:04:41.838 "nvme_io_md": false, 00:04:41.838 "write_zeroes": true, 00:04:41.838 "zcopy": true, 00:04:41.838 "get_zone_info": false, 00:04:41.838 "zone_management": false, 00:04:41.838 "zone_append": false, 00:04:41.838 "compare": false, 00:04:41.838 "compare_and_write": false, 00:04:41.838 "abort": true, 00:04:41.838 "seek_hole": false, 00:04:41.838 "seek_data": false, 00:04:41.838 "copy": true, 00:04:41.838 "nvme_iov_md": false 00:04:41.838 }, 00:04:41.838 "memory_domains": [ 00:04:41.838 { 00:04:41.838 "dma_device_id": "system", 00:04:41.838 "dma_device_type": 1 00:04:41.838 }, 00:04:41.838 { 00:04:41.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.838 "dma_device_type": 2 00:04:41.838 } 
00:04:41.838 ], 00:04:41.838 "driver_specific": {} 00:04:41.838 } 00:04:41.838 ]' 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.838 [2024-11-18 03:53:38.444004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.838 [2024-11-18 03:53:38.444069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.838 [2024-11-18 03:53:38.444090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:41.838 [2024-11-18 03:53:38.444102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.838 [2024-11-18 03:53:38.446559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.838 [2024-11-18 03:53:38.446597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.838 Passthru0 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.838 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.098 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.098 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.098 { 00:04:42.098 "name": "Malloc2", 00:04:42.098 "aliases": [ 00:04:42.098 "599a5349-ca09-44eb-accd-483ec1a9d7be" 
00:04:42.098 ], 00:04:42.098 "product_name": "Malloc disk", 00:04:42.098 "block_size": 512, 00:04:42.098 "num_blocks": 16384, 00:04:42.098 "uuid": "599a5349-ca09-44eb-accd-483ec1a9d7be", 00:04:42.098 "assigned_rate_limits": { 00:04:42.098 "rw_ios_per_sec": 0, 00:04:42.098 "rw_mbytes_per_sec": 0, 00:04:42.098 "r_mbytes_per_sec": 0, 00:04:42.098 "w_mbytes_per_sec": 0 00:04:42.098 }, 00:04:42.098 "claimed": true, 00:04:42.098 "claim_type": "exclusive_write", 00:04:42.098 "zoned": false, 00:04:42.098 "supported_io_types": { 00:04:42.098 "read": true, 00:04:42.098 "write": true, 00:04:42.098 "unmap": true, 00:04:42.098 "flush": true, 00:04:42.098 "reset": true, 00:04:42.098 "nvme_admin": false, 00:04:42.098 "nvme_io": false, 00:04:42.098 "nvme_io_md": false, 00:04:42.098 "write_zeroes": true, 00:04:42.098 "zcopy": true, 00:04:42.098 "get_zone_info": false, 00:04:42.098 "zone_management": false, 00:04:42.098 "zone_append": false, 00:04:42.098 "compare": false, 00:04:42.098 "compare_and_write": false, 00:04:42.098 "abort": true, 00:04:42.098 "seek_hole": false, 00:04:42.098 "seek_data": false, 00:04:42.098 "copy": true, 00:04:42.098 "nvme_iov_md": false 00:04:42.098 }, 00:04:42.098 "memory_domains": [ 00:04:42.098 { 00:04:42.098 "dma_device_id": "system", 00:04:42.098 "dma_device_type": 1 00:04:42.098 }, 00:04:42.098 { 00:04:42.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.098 "dma_device_type": 2 00:04:42.098 } 00:04:42.098 ], 00:04:42.098 "driver_specific": {} 00:04:42.099 }, 00:04:42.099 { 00:04:42.099 "name": "Passthru0", 00:04:42.099 "aliases": [ 00:04:42.099 "d0dbbb17-c5cc-575e-b558-ed748a12b7f1" 00:04:42.099 ], 00:04:42.099 "product_name": "passthru", 00:04:42.099 "block_size": 512, 00:04:42.099 "num_blocks": 16384, 00:04:42.099 "uuid": "d0dbbb17-c5cc-575e-b558-ed748a12b7f1", 00:04:42.099 "assigned_rate_limits": { 00:04:42.099 "rw_ios_per_sec": 0, 00:04:42.099 "rw_mbytes_per_sec": 0, 00:04:42.099 "r_mbytes_per_sec": 0, 00:04:42.099 "w_mbytes_per_sec": 0 
00:04:42.099 }, 00:04:42.099 "claimed": false, 00:04:42.099 "zoned": false, 00:04:42.099 "supported_io_types": { 00:04:42.099 "read": true, 00:04:42.099 "write": true, 00:04:42.099 "unmap": true, 00:04:42.099 "flush": true, 00:04:42.099 "reset": true, 00:04:42.099 "nvme_admin": false, 00:04:42.099 "nvme_io": false, 00:04:42.099 "nvme_io_md": false, 00:04:42.099 "write_zeroes": true, 00:04:42.099 "zcopy": true, 00:04:42.099 "get_zone_info": false, 00:04:42.099 "zone_management": false, 00:04:42.099 "zone_append": false, 00:04:42.099 "compare": false, 00:04:42.099 "compare_and_write": false, 00:04:42.099 "abort": true, 00:04:42.099 "seek_hole": false, 00:04:42.099 "seek_data": false, 00:04:42.099 "copy": true, 00:04:42.099 "nvme_iov_md": false 00:04:42.099 }, 00:04:42.099 "memory_domains": [ 00:04:42.099 { 00:04:42.099 "dma_device_id": "system", 00:04:42.099 "dma_device_type": 1 00:04:42.099 }, 00:04:42.099 { 00:04:42.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.099 "dma_device_type": 2 00:04:42.099 } 00:04:42.099 ], 00:04:42.099 "driver_specific": { 00:04:42.099 "passthru": { 00:04:42.099 "name": "Passthru0", 00:04:42.099 "base_bdev_name": "Malloc2" 00:04:42.099 } 00:04:42.099 } 00:04:42.099 } 00:04:42.099 ]' 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.099 00:04:42.099 real 0m0.359s 00:04:42.099 user 0m0.203s 00:04:42.099 sys 0m0.050s 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.099 03:53:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 ************************************ 00:04:42.099 END TEST rpc_daemon_integrity 00:04:42.099 ************************************ 00:04:42.099 03:53:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:42.099 03:53:38 rpc -- rpc/rpc.sh@84 -- # killprocess 56879 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 56879 ']' 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@958 -- # kill -0 56879 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56879 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.099 
killing process with pid 56879 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56879' 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@973 -- # kill 56879 00:04:42.099 03:53:38 rpc -- common/autotest_common.sh@978 -- # wait 56879 00:04:45.410 00:04:45.410 real 0m5.692s 00:04:45.410 user 0m6.007s 00:04:45.410 sys 0m1.121s 00:04:45.410 03:53:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.410 ************************************ 00:04:45.410 END TEST rpc 00:04:45.410 ************************************ 00:04:45.410 03:53:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.410 03:53:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.410 03:53:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.410 03:53:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.410 03:53:41 -- common/autotest_common.sh@10 -- # set +x 00:04:45.410 ************************************ 00:04:45.410 START TEST skip_rpc 00:04:45.410 ************************************ 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:45.410 * Looking for test storage... 
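The teardown above goes through the harness's `killprocess` helper, which probes the target PID with `kill -0` before and after signalling it. A minimal self-contained sketch of that liveness-check pattern, using a throwaway `sleep` process as a stand-in for `spdk_tgt` (this is an illustration of the idiom, not the in-tree `killprocess` implementation):

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 liveness pattern used during teardown.
# A background `sleep` stands in for the spdk_tgt process.
sleep 30 &
pid=$!

# kill -0 delivers no signal; its exit status only reports whether
# the PID exists and is signallable by this user.
if kill -0 "$pid" 2>/dev/null; then
    echo "alive"
fi

kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null

# After the wait, the PID must no longer be signallable.
if ! kill -0 "$pid" 2>/dev/null; then
    echo "gone"
fi
```

The same probe explains the `kill -0 56879` line in the log: it is a liveness test, not a termination attempt.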
00:04:45.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.410 03:53:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.410 --rc genhtml_branch_coverage=1 00:04:45.410 --rc genhtml_function_coverage=1 00:04:45.410 --rc genhtml_legend=1 00:04:45.410 --rc geninfo_all_blocks=1 00:04:45.410 --rc geninfo_unexecuted_blocks=1 00:04:45.410 00:04:45.410 ' 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.410 --rc genhtml_branch_coverage=1 00:04:45.410 --rc genhtml_function_coverage=1 00:04:45.410 --rc genhtml_legend=1 00:04:45.410 --rc geninfo_all_blocks=1 00:04:45.410 --rc geninfo_unexecuted_blocks=1 00:04:45.410 00:04:45.410 ' 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.410 --rc genhtml_branch_coverage=1 00:04:45.410 --rc genhtml_function_coverage=1 00:04:45.410 --rc genhtml_legend=1 00:04:45.410 --rc geninfo_all_blocks=1 00:04:45.410 --rc geninfo_unexecuted_blocks=1 00:04:45.410 00:04:45.410 ' 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.410 --rc genhtml_branch_coverage=1 00:04:45.410 --rc genhtml_function_coverage=1 00:04:45.410 --rc genhtml_legend=1 00:04:45.410 --rc geninfo_all_blocks=1 00:04:45.410 --rc geninfo_unexecuted_blocks=1 00:04:45.410 00:04:45.410 ' 00:04:45.410 03:53:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.410 03:53:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:45.410 03:53:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.410 03:53:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.410 ************************************ 00:04:45.410 START TEST skip_rpc 00:04:45.410 ************************************ 00:04:45.410 03:53:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:45.410 03:53:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57114 00:04:45.410 03:53:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.410 03:53:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.410 03:53:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.410 [2024-11-18 03:53:41.731424] Starting SPDK v25.01-pre 
git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:45.410 [2024-11-18 03:53:41.731547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57114 ] 00:04:45.410 [2024-11-18 03:53:41.911353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.410 [2024-11-18 03:53:42.044367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57114 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57114 ']' 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57114 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57114 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.686 killing process with pid 57114 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57114' 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57114 00:04:50.686 03:53:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57114 00:04:52.593 00:04:52.593 real 0m7.538s 00:04:52.593 user 0m6.895s 00:04:52.593 sys 0m0.560s 00:04:52.593 03:53:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.593 03:53:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.593 ************************************ 00:04:52.593 END TEST skip_rpc 00:04:52.593 ************************************ 00:04:52.593 03:53:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:52.593 03:53:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.593 03:53:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.593 03:53:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.593 
************************************ 00:04:52.593 START TEST skip_rpc_with_json 00:04:52.593 ************************************ 00:04:52.593 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:52.593 03:53:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57218 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57218 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57218 ']' 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.851 03:53:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.851 [2024-11-18 03:53:49.329516] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:52.851 [2024-11-18 03:53:49.329658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57218 ] 00:04:53.110 [2024-11-18 03:53:49.503689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.110 [2024-11-18 03:53:49.610416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.050 [2024-11-18 03:53:50.464289] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:54.050 request: 00:04:54.050 { 00:04:54.050 "trtype": "tcp", 00:04:54.050 "method": "nvmf_get_transports", 00:04:54.050 "req_id": 1 00:04:54.050 } 00:04:54.050 Got JSON-RPC error response 00:04:54.050 response: 00:04:54.050 { 00:04:54.050 "code": -19, 00:04:54.050 "message": "No such device" 00:04:54.050 } 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.050 [2024-11-18 03:53:50.476416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.050 03:53:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.050 { 00:04:54.050 "subsystems": [ 00:04:54.050 { 00:04:54.050 "subsystem": "fsdev", 00:04:54.050 "config": [ 00:04:54.050 { 00:04:54.050 "method": "fsdev_set_opts", 00:04:54.050 "params": { 00:04:54.050 "fsdev_io_pool_size": 65535, 00:04:54.050 "fsdev_io_cache_size": 256 00:04:54.050 } 00:04:54.050 } 00:04:54.050 ] 00:04:54.050 }, 00:04:54.050 { 00:04:54.050 "subsystem": "keyring", 00:04:54.050 "config": [] 00:04:54.050 }, 00:04:54.050 { 00:04:54.050 "subsystem": "iobuf", 00:04:54.050 "config": [ 00:04:54.050 { 00:04:54.050 "method": "iobuf_set_options", 00:04:54.050 "params": { 00:04:54.050 "small_pool_count": 8192, 00:04:54.050 "large_pool_count": 1024, 00:04:54.050 "small_bufsize": 8192, 00:04:54.050 "large_bufsize": 135168, 00:04:54.050 "enable_numa": false 00:04:54.050 } 00:04:54.050 } 00:04:54.050 ] 00:04:54.050 }, 00:04:54.050 { 00:04:54.050 "subsystem": "sock", 00:04:54.050 "config": [ 00:04:54.050 { 00:04:54.050 "method": "sock_set_default_impl", 00:04:54.050 "params": { 00:04:54.050 "impl_name": "posix" 00:04:54.050 } 00:04:54.050 }, 00:04:54.050 { 00:04:54.050 "method": "sock_impl_set_options", 00:04:54.050 "params": { 00:04:54.050 "impl_name": "ssl", 00:04:54.050 "recv_buf_size": 4096, 00:04:54.050 "send_buf_size": 4096, 00:04:54.050 "enable_recv_pipe": true, 00:04:54.050 "enable_quickack": false, 00:04:54.050 
"enable_placement_id": 0, 00:04:54.050 "enable_zerocopy_send_server": true, 00:04:54.050 "enable_zerocopy_send_client": false, 00:04:54.050 "zerocopy_threshold": 0, 00:04:54.050 "tls_version": 0, 00:04:54.050 "enable_ktls": false 00:04:54.050 } 00:04:54.050 }, 00:04:54.050 { 00:04:54.050 "method": "sock_impl_set_options", 00:04:54.050 "params": { 00:04:54.050 "impl_name": "posix", 00:04:54.050 "recv_buf_size": 2097152, 00:04:54.050 "send_buf_size": 2097152, 00:04:54.050 "enable_recv_pipe": true, 00:04:54.050 "enable_quickack": false, 00:04:54.050 "enable_placement_id": 0, 00:04:54.050 "enable_zerocopy_send_server": true, 00:04:54.050 "enable_zerocopy_send_client": false, 00:04:54.050 "zerocopy_threshold": 0, 00:04:54.050 "tls_version": 0, 00:04:54.050 "enable_ktls": false 00:04:54.050 } 00:04:54.050 } 00:04:54.050 ] 00:04:54.050 }, 00:04:54.050 { 00:04:54.050 "subsystem": "vmd", 00:04:54.051 "config": [] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "accel", 00:04:54.051 "config": [ 00:04:54.051 { 00:04:54.051 "method": "accel_set_options", 00:04:54.051 "params": { 00:04:54.051 "small_cache_size": 128, 00:04:54.051 "large_cache_size": 16, 00:04:54.051 "task_count": 2048, 00:04:54.051 "sequence_count": 2048, 00:04:54.051 "buf_count": 2048 00:04:54.051 } 00:04:54.051 } 00:04:54.051 ] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "bdev", 00:04:54.051 "config": [ 00:04:54.051 { 00:04:54.051 "method": "bdev_set_options", 00:04:54.051 "params": { 00:04:54.051 "bdev_io_pool_size": 65535, 00:04:54.051 "bdev_io_cache_size": 256, 00:04:54.051 "bdev_auto_examine": true, 00:04:54.051 "iobuf_small_cache_size": 128, 00:04:54.051 "iobuf_large_cache_size": 16 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "bdev_raid_set_options", 00:04:54.051 "params": { 00:04:54.051 "process_window_size_kb": 1024, 00:04:54.051 "process_max_bandwidth_mb_sec": 0 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "bdev_iscsi_set_options", 
00:04:54.051 "params": { 00:04:54.051 "timeout_sec": 30 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "bdev_nvme_set_options", 00:04:54.051 "params": { 00:04:54.051 "action_on_timeout": "none", 00:04:54.051 "timeout_us": 0, 00:04:54.051 "timeout_admin_us": 0, 00:04:54.051 "keep_alive_timeout_ms": 10000, 00:04:54.051 "arbitration_burst": 0, 00:04:54.051 "low_priority_weight": 0, 00:04:54.051 "medium_priority_weight": 0, 00:04:54.051 "high_priority_weight": 0, 00:04:54.051 "nvme_adminq_poll_period_us": 10000, 00:04:54.051 "nvme_ioq_poll_period_us": 0, 00:04:54.051 "io_queue_requests": 0, 00:04:54.051 "delay_cmd_submit": true, 00:04:54.051 "transport_retry_count": 4, 00:04:54.051 "bdev_retry_count": 3, 00:04:54.051 "transport_ack_timeout": 0, 00:04:54.051 "ctrlr_loss_timeout_sec": 0, 00:04:54.051 "reconnect_delay_sec": 0, 00:04:54.051 "fast_io_fail_timeout_sec": 0, 00:04:54.051 "disable_auto_failback": false, 00:04:54.051 "generate_uuids": false, 00:04:54.051 "transport_tos": 0, 00:04:54.051 "nvme_error_stat": false, 00:04:54.051 "rdma_srq_size": 0, 00:04:54.051 "io_path_stat": false, 00:04:54.051 "allow_accel_sequence": false, 00:04:54.051 "rdma_max_cq_size": 0, 00:04:54.051 "rdma_cm_event_timeout_ms": 0, 00:04:54.051 "dhchap_digests": [ 00:04:54.051 "sha256", 00:04:54.051 "sha384", 00:04:54.051 "sha512" 00:04:54.051 ], 00:04:54.051 "dhchap_dhgroups": [ 00:04:54.051 "null", 00:04:54.051 "ffdhe2048", 00:04:54.051 "ffdhe3072", 00:04:54.051 "ffdhe4096", 00:04:54.051 "ffdhe6144", 00:04:54.051 "ffdhe8192" 00:04:54.051 ] 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "bdev_nvme_set_hotplug", 00:04:54.051 "params": { 00:04:54.051 "period_us": 100000, 00:04:54.051 "enable": false 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "bdev_wait_for_examine" 00:04:54.051 } 00:04:54.051 ] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "scsi", 00:04:54.051 "config": null 00:04:54.051 }, 00:04:54.051 { 
00:04:54.051 "subsystem": "scheduler", 00:04:54.051 "config": [ 00:04:54.051 { 00:04:54.051 "method": "framework_set_scheduler", 00:04:54.051 "params": { 00:04:54.051 "name": "static" 00:04:54.051 } 00:04:54.051 } 00:04:54.051 ] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "vhost_scsi", 00:04:54.051 "config": [] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "vhost_blk", 00:04:54.051 "config": [] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "ublk", 00:04:54.051 "config": [] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "nbd", 00:04:54.051 "config": [] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "nvmf", 00:04:54.051 "config": [ 00:04:54.051 { 00:04:54.051 "method": "nvmf_set_config", 00:04:54.051 "params": { 00:04:54.051 "discovery_filter": "match_any", 00:04:54.051 "admin_cmd_passthru": { 00:04:54.051 "identify_ctrlr": false 00:04:54.051 }, 00:04:54.051 "dhchap_digests": [ 00:04:54.051 "sha256", 00:04:54.051 "sha384", 00:04:54.051 "sha512" 00:04:54.051 ], 00:04:54.051 "dhchap_dhgroups": [ 00:04:54.051 "null", 00:04:54.051 "ffdhe2048", 00:04:54.051 "ffdhe3072", 00:04:54.051 "ffdhe4096", 00:04:54.051 "ffdhe6144", 00:04:54.051 "ffdhe8192" 00:04:54.051 ] 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "nvmf_set_max_subsystems", 00:04:54.051 "params": { 00:04:54.051 "max_subsystems": 1024 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "nvmf_set_crdt", 00:04:54.051 "params": { 00:04:54.051 "crdt1": 0, 00:04:54.051 "crdt2": 0, 00:04:54.051 "crdt3": 0 00:04:54.051 } 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "method": "nvmf_create_transport", 00:04:54.051 "params": { 00:04:54.051 "trtype": "TCP", 00:04:54.051 "max_queue_depth": 128, 00:04:54.051 "max_io_qpairs_per_ctrlr": 127, 00:04:54.051 "in_capsule_data_size": 4096, 00:04:54.051 "max_io_size": 131072, 00:04:54.051 "io_unit_size": 131072, 00:04:54.051 "max_aq_depth": 128, 00:04:54.051 "num_shared_buffers": 511, 
00:04:54.051 "buf_cache_size": 4294967295, 00:04:54.051 "dif_insert_or_strip": false, 00:04:54.051 "zcopy": false, 00:04:54.051 "c2h_success": true, 00:04:54.051 "sock_priority": 0, 00:04:54.051 "abort_timeout_sec": 1, 00:04:54.051 "ack_timeout": 0, 00:04:54.051 "data_wr_pool_size": 0 00:04:54.051 } 00:04:54.051 } 00:04:54.051 ] 00:04:54.051 }, 00:04:54.051 { 00:04:54.051 "subsystem": "iscsi", 00:04:54.051 "config": [ 00:04:54.051 { 00:04:54.051 "method": "iscsi_set_options", 00:04:54.051 "params": { 00:04:54.051 "node_base": "iqn.2016-06.io.spdk", 00:04:54.051 "max_sessions": 128, 00:04:54.051 "max_connections_per_session": 2, 00:04:54.051 "max_queue_depth": 64, 00:04:54.051 "default_time2wait": 2, 00:04:54.051 "default_time2retain": 20, 00:04:54.051 "first_burst_length": 8192, 00:04:54.051 "immediate_data": true, 00:04:54.051 "allow_duplicated_isid": false, 00:04:54.051 "error_recovery_level": 0, 00:04:54.051 "nop_timeout": 60, 00:04:54.051 "nop_in_interval": 30, 00:04:54.051 "disable_chap": false, 00:04:54.051 "require_chap": false, 00:04:54.051 "mutual_chap": false, 00:04:54.051 "chap_group": 0, 00:04:54.051 "max_large_datain_per_connection": 64, 00:04:54.052 "max_r2t_per_connection": 4, 00:04:54.052 "pdu_pool_size": 36864, 00:04:54.052 "immediate_data_pool_size": 16384, 00:04:54.052 "data_out_pool_size": 2048 00:04:54.052 } 00:04:54.052 } 00:04:54.052 ] 00:04:54.052 } 00:04:54.052 ] 00:04:54.052 } 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57218 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57218 ']' 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57218 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.052 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57218 00:04:54.311 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.311 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.311 killing process with pid 57218 00:04:54.311 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57218' 00:04:54.311 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57218 00:04:54.311 03:53:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57218 00:04:56.850 03:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57274 00:04:56.850 03:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.850 03:53:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57274 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57274 ']' 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57274 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57274 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:05:02.129 killing process with pid 57274 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57274' 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57274 00:05:02.129 03:53:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57274 00:05:04.036 03:54:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.037 00:05:04.037 real 0m11.100s 00:05:04.037 user 0m10.599s 00:05:04.037 sys 0m0.804s 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 ************************************ 00:05:04.037 END TEST skip_rpc_with_json 00:05:04.037 ************************************ 00:05:04.037 03:54:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.037 03:54:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.037 03:54:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.037 03:54:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 ************************************ 00:05:04.037 START TEST skip_rpc_with_delay 00:05:04.037 ************************************ 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:04.037 03:54:00 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.037 [2024-11-18 03:54:00.505802] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.037 00:05:04.037 real 0m0.169s 00:05:04.037 user 0m0.095s 00:05:04.037 sys 0m0.072s 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.037 03:54:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 ************************************ 00:05:04.037 END TEST skip_rpc_with_delay 00:05:04.037 ************************************ 00:05:04.037 03:54:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:04.037 03:54:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:04.037 03:54:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:04.037 03:54:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.037 03:54:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.037 03:54:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.037 ************************************ 00:05:04.037 START TEST exit_on_failed_rpc_init 00:05:04.037 ************************************ 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57402 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57402 00:05:04.037 03:54:00 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57402 ']' 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.037 03:54:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.296 [2024-11-18 03:54:00.739384] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:04.296 [2024-11-18 03:54:00.739496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57402 ] 00:05:04.296 [2024-11-18 03:54:00.896367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.555 [2024-11-18 03:54:01.009132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.494 03:54:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:05.494 03:54:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.494 [2024-11-18 03:54:01.956491] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:05.494 [2024-11-18 03:54:01.956625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57426 ] 00:05:05.494 [2024-11-18 03:54:02.127896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.754 [2024-11-18 03:54:02.242807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.754 [2024-11-18 03:54:02.242915] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:05.754 [2024-11-18 03:54:02.242928] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:05.754 [2024-11-18 03:54:02.242941] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57402 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57402 ']' 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57402 00:05:06.013 03:54:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57402 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.013 killing process with pid 57402 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57402' 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57402 00:05:06.013 03:54:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57402 00:05:08.573 00:05:08.573 real 0m4.187s 00:05:08.573 user 0m4.536s 00:05:08.573 sys 0m0.528s 00:05:08.573 03:54:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.573 03:54:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.573 ************************************ 00:05:08.573 END TEST exit_on_failed_rpc_init 00:05:08.573 ************************************ 00:05:08.573 03:54:04 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:08.573 00:05:08.573 real 0m23.502s 00:05:08.573 user 0m22.315s 00:05:08.573 sys 0m2.295s 00:05:08.573 03:54:04 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.573 03:54:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.573 ************************************ 00:05:08.573 END TEST skip_rpc 00:05:08.573 ************************************ 00:05:08.573 03:54:04 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.573 03:54:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.573 03:54:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.573 03:54:04 -- common/autotest_common.sh@10 -- # set +x 00:05:08.573 ************************************ 00:05:08.573 START TEST rpc_client 00:05:08.573 ************************************ 00:05:08.573 03:54:04 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:08.573 * Looking for test storage... 00:05:08.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:08.573 03:54:05 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.573 03:54:05 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.573 03:54:05 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.573 03:54:05 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.574 03:54:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:08.574 03:54:05 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.574 03:54:05 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.574 --rc genhtml_branch_coverage=1 00:05:08.574 --rc genhtml_function_coverage=1 00:05:08.574 --rc genhtml_legend=1 00:05:08.574 --rc geninfo_all_blocks=1 00:05:08.574 --rc geninfo_unexecuted_blocks=1 00:05:08.574 00:05:08.574 ' 00:05:08.574 03:54:05 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.574 --rc genhtml_branch_coverage=1 00:05:08.574 --rc genhtml_function_coverage=1 00:05:08.574 --rc 
genhtml_legend=1 00:05:08.574 --rc geninfo_all_blocks=1 00:05:08.574 --rc geninfo_unexecuted_blocks=1 00:05:08.574 00:05:08.574 ' 00:05:08.574 03:54:05 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.574 --rc genhtml_branch_coverage=1 00:05:08.574 --rc genhtml_function_coverage=1 00:05:08.574 --rc genhtml_legend=1 00:05:08.574 --rc geninfo_all_blocks=1 00:05:08.574 --rc geninfo_unexecuted_blocks=1 00:05:08.574 00:05:08.574 ' 00:05:08.574 03:54:05 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.574 --rc genhtml_branch_coverage=1 00:05:08.574 --rc genhtml_function_coverage=1 00:05:08.574 --rc genhtml_legend=1 00:05:08.574 --rc geninfo_all_blocks=1 00:05:08.574 --rc geninfo_unexecuted_blocks=1 00:05:08.574 00:05:08.574 ' 00:05:08.574 03:54:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:08.833 OK 00:05:08.833 03:54:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:08.833 00:05:08.833 real 0m0.295s 00:05:08.833 user 0m0.160s 00:05:08.833 sys 0m0.152s 00:05:08.833 03:54:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.833 03:54:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:08.833 ************************************ 00:05:08.833 END TEST rpc_client 00:05:08.833 ************************************ 00:05:08.833 03:54:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.833 03:54:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.833 03:54:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.833 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:05:08.833 ************************************ 00:05:08.833 START TEST json_config 
00:05:08.833 ************************************ 00:05:08.834 03:54:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:08.834 03:54:05 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.834 03:54:05 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.834 03:54:05 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.834 03:54:05 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.834 03:54:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.834 03:54:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.834 03:54:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.834 03:54:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.834 03:54:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.834 03:54:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.834 03:54:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.834 03:54:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.834 03:54:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.834 03:54:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.834 03:54:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.834 03:54:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:08.834 03:54:05 json_config -- scripts/common.sh@345 -- # : 1 00:05:08.834 03:54:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.834 03:54:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.834 03:54:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:09.094 03:54:05 json_config -- scripts/common.sh@353 -- # local d=1 00:05:09.094 03:54:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.094 03:54:05 json_config -- scripts/common.sh@355 -- # echo 1 00:05:09.094 03:54:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.094 03:54:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:09.094 03:54:05 json_config -- scripts/common.sh@353 -- # local d=2 00:05:09.094 03:54:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.094 03:54:05 json_config -- scripts/common.sh@355 -- # echo 2 00:05:09.094 03:54:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.094 03:54:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.094 03:54:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.094 03:54:05 json_config -- scripts/common.sh@368 -- # return 0 00:05:09.094 03:54:05 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.094 03:54:05 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.094 --rc genhtml_branch_coverage=1 00:05:09.094 --rc genhtml_function_coverage=1 00:05:09.094 --rc genhtml_legend=1 00:05:09.094 --rc geninfo_all_blocks=1 00:05:09.094 --rc geninfo_unexecuted_blocks=1 00:05:09.094 00:05:09.094 ' 00:05:09.094 03:54:05 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.094 --rc genhtml_branch_coverage=1 00:05:09.094 --rc genhtml_function_coverage=1 00:05:09.094 --rc genhtml_legend=1 00:05:09.094 --rc geninfo_all_blocks=1 00:05:09.094 --rc geninfo_unexecuted_blocks=1 00:05:09.094 00:05:09.094 ' 00:05:09.094 03:54:05 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.094 --rc genhtml_branch_coverage=1 00:05:09.094 --rc genhtml_function_coverage=1 00:05:09.094 --rc genhtml_legend=1 00:05:09.094 --rc geninfo_all_blocks=1 00:05:09.094 --rc geninfo_unexecuted_blocks=1 00:05:09.094 00:05:09.094 ' 00:05:09.094 03:54:05 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.094 --rc genhtml_branch_coverage=1 00:05:09.094 --rc genhtml_function_coverage=1 00:05:09.094 --rc genhtml_legend=1 00:05:09.094 --rc geninfo_all_blocks=1 00:05:09.094 --rc geninfo_unexecuted_blocks=1 00:05:09.094 00:05:09.094 ' 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71afe1ad-b1cd-47b1-a3e0-2d96376cb6e9 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=71afe1ad-b1cd-47b1-a3e0-2d96376cb6e9 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.094 03:54:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.094 03:54:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.094 03:54:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.094 03:54:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.094 03:54:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.094 03:54:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.094 03:54:05 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.094 03:54:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:09.094 03:54:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@51 -- # : 0 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.094 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.094 03:54:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
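The `[: : integer expression expected` diagnostic captured in the trace above is nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: `-eq` requires both operands to be integers, and the left side expanded to an empty string. A minimal sketch of that failure mode and the usual `${var:-0}` guard (the variable name `FLAG` is a hypothetical stand-in for whatever the script expands there):

```shell
#!/usr/bin/env bash
# FLAG is a hypothetical stand-in for the unset variable behind the
# "[: : integer expression expected" error seen in the log.
FLAG=""

# -eq demands integers, so an empty operand makes the test itself fail
# (non-zero exit status, diagnostic on stderr), not merely evaluate false:
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag set"
fi

# Defaulting the expansion keeps the operand numeric in every case:
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

Run as written, only the second `if` matches a branch (`flag unset`); the first test aborts with an error status instead of taking either path, which is exactly why the log shows the diagnostic but the run continues.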
00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:09.094 WARNING: No tests are enabled so not running JSON configuration tests 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:09.094 03:54:05 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:09.094 00:05:09.094 real 0m0.222s 00:05:09.094 user 0m0.130s 00:05:09.094 sys 0m0.100s 00:05:09.094 03:54:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.094 03:54:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.094 ************************************ 00:05:09.094 END TEST json_config 00:05:09.094 ************************************ 00:05:09.094 03:54:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:09.094 03:54:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.094 03:54:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.094 03:54:05 -- common/autotest_common.sh@10 -- # set +x 00:05:09.094 ************************************ 00:05:09.094 START TEST json_config_extra_key 00:05:09.094 ************************************ 00:05:09.094 03:54:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:09.094 03:54:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.094 03:54:05 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:09.094 03:54:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.355 03:54:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:09.355 03:54:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.355 03:54:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.355 --rc genhtml_branch_coverage=1 00:05:09.355 --rc genhtml_function_coverage=1 00:05:09.355 --rc genhtml_legend=1 00:05:09.355 --rc geninfo_all_blocks=1 00:05:09.355 --rc geninfo_unexecuted_blocks=1 00:05:09.355 00:05:09.355 ' 00:05:09.355 03:54:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.355 --rc genhtml_branch_coverage=1 00:05:09.355 --rc genhtml_function_coverage=1 00:05:09.355 --rc 
genhtml_legend=1 00:05:09.355 --rc geninfo_all_blocks=1 00:05:09.355 --rc geninfo_unexecuted_blocks=1 00:05:09.355 00:05:09.355 ' 00:05:09.355 03:54:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.355 --rc genhtml_branch_coverage=1 00:05:09.355 --rc genhtml_function_coverage=1 00:05:09.355 --rc genhtml_legend=1 00:05:09.355 --rc geninfo_all_blocks=1 00:05:09.355 --rc geninfo_unexecuted_blocks=1 00:05:09.355 00:05:09.355 ' 00:05:09.355 03:54:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.355 --rc genhtml_branch_coverage=1 00:05:09.355 --rc genhtml_function_coverage=1 00:05:09.355 --rc genhtml_legend=1 00:05:09.355 --rc geninfo_all_blocks=1 00:05:09.355 --rc geninfo_unexecuted_blocks=1 00:05:09.355 00:05:09.355 ' 00:05:09.355 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71afe1ad-b1cd-47b1-a3e0-2d96376cb6e9 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=71afe1ad-b1cd-47b1-a3e0-2d96376cb6e9 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.355 03:54:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.355 03:54:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.355 03:54:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.356 03:54:05 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.356 03:54:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.356 03:54:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.356 03:54:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:09.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:09.356 03:54:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.356 INFO: launching applications... 00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
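The `declare -A` lines in the trace above show how json_config/common.sh tracks its apps: parallel associative arrays keyed by app name (`target`), holding the pid, RPC socket, and launch parameters for each instance. A condensed sketch of that bookkeeping pattern, with a placeholder process standing in for `spdk_tgt`:

```shell
#!/usr/bin/env bash
# Parallel associative arrays keyed by app name, as in the trace above.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')

start_app() {
  local app=$1
  # Placeholder process; the real helper launches spdk_tgt with
  # ${app_params[$app]} and -r ${app_socket[$app]}.
  sleep 60 &
  app_pid["$app"]=$!
  echo "started $app: pid=${app_pid[$app]} rpc=${app_socket[$app]}"
}

start_app target
kill "${app_pid[target]}"
```

Keying every array on the same name is what lets one set of helpers (`json_config_test_start_app`, `json_config_test_shutdown_app`) manage several app instances without per-app variables.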
00:05:09.356 03:54:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57630 00:05:09.356 Waiting for target to run... 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57630 /var/tmp/spdk_tgt.sock 00:05:09.356 03:54:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:09.356 03:54:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57630 ']' 00:05:09.356 03:54:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.356 03:54:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:09.356 03:54:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.356 03:54:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.356 03:54:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.356 [2024-11-18 03:54:05.894686] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:09.356 [2024-11-18 03:54:05.894797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57630 ] 00:05:09.926 [2024-11-18 03:54:06.273397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.926 [2024-11-18 03:54:06.380674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.495 03:54:07 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.495 03:54:07 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:10.495 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.495 INFO: shutting down applications... 00:05:10.495 03:54:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
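The shutdown sequence the trace moves into next is a bounded poll: send SIGINT to the recorded pid, then probe it with `kill -0` (signal 0 checks that the process exists without delivering anything) every 0.5 s, for at most 30 attempts, roughly 15 s. A runnable sketch of that loop; it signals a plain `sleep` with SIGTERM because, unlike `spdk_tgt`, a background job in a non-interactive shell starts with SIGINT ignored:

```shell
#!/usr/bin/env bash
# Stand-in for the spdk_tgt process whose pid the helper recorded.
sleep 300 &
pid=$!

# The log sends SIGINT to spdk_tgt; SIGTERM here only because
# non-interactive shells start background jobs with SIGINT ignored.
kill -TERM "$pid"

for (( i = 0; i < 30; i++ )); do        # up to 30 * 0.5 s = ~15 s
  if ! kill -0 "$pid" 2>/dev/null; then
    echo 'SPDK target shutdown done'
    break
  fi
  sleep 0.5
done
```

The repeated `sleep 0.5` iterations visible below are this loop waiting out the target's graceful exit; only after the 30 tries are exhausted would the helper escalate.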
00:05:10.495 03:54:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57630 ]] 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57630 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.495 03:54:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.496 03:54:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:10.496 03:54:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.064 03:54:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.064 03:54:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.064 03:54:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:11.064 03:54:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.632 03:54:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.632 03:54:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.632 03:54:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:11.632 03:54:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.202 03:54:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.202 03:54:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.202 03:54:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:12.202 03:54:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.775 03:54:09 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:12.775 03:54:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.775 03:54:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:12.775 03:54:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.036 03:54:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.036 03:54:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.036 03:54:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:13.036 03:54:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57630 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.606 SPDK target shutdown done 00:05:13.606 03:54:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.606 Success 00:05:13.606 03:54:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:13.606 00:05:13.606 real 0m4.549s 00:05:13.606 user 0m3.868s 00:05:13.606 sys 0m0.543s 00:05:13.606 03:54:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.606 03:54:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.606 ************************************ 00:05:13.606 END TEST json_config_extra_key 00:05:13.606 ************************************ 00:05:13.606 03:54:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.606 03:54:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.606 03:54:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.606 03:54:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.606 ************************************ 00:05:13.606 START TEST alias_rpc 00:05:13.606 ************************************ 00:05:13.606 03:54:10 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.866 * Looking for test storage... 00:05:13.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.866 03:54:10 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.866 03:54:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.866 --rc genhtml_branch_coverage=1 00:05:13.866 --rc genhtml_function_coverage=1 00:05:13.866 --rc genhtml_legend=1 00:05:13.866 --rc geninfo_all_blocks=1 00:05:13.866 --rc geninfo_unexecuted_blocks=1 00:05:13.866 00:05:13.866 ' 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.866 --rc genhtml_branch_coverage=1 00:05:13.866 --rc genhtml_function_coverage=1 00:05:13.866 --rc 
genhtml_legend=1 00:05:13.866 --rc geninfo_all_blocks=1 00:05:13.866 --rc geninfo_unexecuted_blocks=1 00:05:13.866 00:05:13.866 ' 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.866 --rc genhtml_branch_coverage=1 00:05:13.866 --rc genhtml_function_coverage=1 00:05:13.866 --rc genhtml_legend=1 00:05:13.866 --rc geninfo_all_blocks=1 00:05:13.866 --rc geninfo_unexecuted_blocks=1 00:05:13.866 00:05:13.866 ' 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.866 --rc genhtml_branch_coverage=1 00:05:13.866 --rc genhtml_function_coverage=1 00:05:13.866 --rc genhtml_legend=1 00:05:13.866 --rc geninfo_all_blocks=1 00:05:13.866 --rc geninfo_unexecuted_blocks=1 00:05:13.866 00:05:13.866 ' 00:05:13.866 03:54:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.866 03:54:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57746 00:05:13.866 03:54:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.866 03:54:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57746 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57746 ']' 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
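The `cmp_versions` walk xtraced repeatedly above (scripts/common.sh, invoked as `lt 1.15 2` against the installed lcov) splits both version strings on `.`, `-`, and `:` into arrays via `IFS=.-: read -ra`, then compares them component by component up to the longer length. A condensed sketch of the same algorithm; the real script also routes each component through a `decimal` sanity check that this sketch omits:

```shell
#!/usr/bin/env bash
# Condensed version of the cmp_versions/lt logic traced in the log:
# split on . - : then compare numerically, index by index.
lt() {   # usage: lt VER1 VER2  -> exit 0 iff VER1 < VER2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
  done
  return 1   # all components equal: not strictly less-than
}

lt 1.15 2 && echo '1.15 < 2'      # the check the log runs on lcov's version
lt 1.2.3 1.10.0 && echo '1.2.3 < 1.10.0'
```

Missing components default to 0 (`${ver1[v]:-0}`), which is why `1.15 < 2` holds and the scripts enable the extra `--rc lcov_*_coverage` options only for sufficiently new lcov.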
00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.866 03:54:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.126 [2024-11-18 03:54:10.516967] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:14.126 [2024-11-18 03:54:10.517103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57746 ] 00:05:14.126 [2024-11-18 03:54:10.684306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.386 [2024-11-18 03:54:10.787858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:15.325 03:54:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:15.325 03:54:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57746 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57746 ']' 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57746 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57746 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.325 killing process with pid 57746 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57746' 00:05:15.325 03:54:11 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57746 00:05:15.325 03:54:11 alias_rpc -- common/autotest_common.sh@978 -- # wait 57746 00:05:17.864 00:05:17.864 real 0m3.938s 00:05:17.864 user 0m3.933s 00:05:17.864 sys 0m0.554s 00:05:17.864 03:54:14 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.864 03:54:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.864 ************************************ 00:05:17.864 END TEST alias_rpc 00:05:17.864 ************************************ 00:05:17.864 03:54:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:17.864 03:54:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.864 03:54:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.864 03:54:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.864 03:54:14 -- common/autotest_common.sh@10 -- # set +x 00:05:17.864 ************************************ 00:05:17.864 START TEST spdkcli_tcp 00:05:17.864 ************************************ 00:05:17.864 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.864 * Looking for test storage... 
00:05:17.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:17.864 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.864 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.864 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.864 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:17.864 03:54:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.865 03:54:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.865 --rc genhtml_branch_coverage=1 00:05:17.865 --rc genhtml_function_coverage=1 00:05:17.865 --rc genhtml_legend=1 00:05:17.865 --rc geninfo_all_blocks=1 00:05:17.865 --rc geninfo_unexecuted_blocks=1 00:05:17.865 00:05:17.865 ' 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.865 --rc genhtml_branch_coverage=1 00:05:17.865 --rc genhtml_function_coverage=1 00:05:17.865 --rc genhtml_legend=1 00:05:17.865 --rc geninfo_all_blocks=1 00:05:17.865 --rc geninfo_unexecuted_blocks=1 00:05:17.865 00:05:17.865 ' 00:05:17.865 03:54:14 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.865 --rc genhtml_branch_coverage=1 00:05:17.865 --rc genhtml_function_coverage=1 00:05:17.865 --rc genhtml_legend=1 00:05:17.865 --rc geninfo_all_blocks=1 00:05:17.865 --rc geninfo_unexecuted_blocks=1 00:05:17.865 00:05:17.865 ' 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.865 --rc genhtml_branch_coverage=1 00:05:17.865 --rc genhtml_function_coverage=1 00:05:17.865 --rc genhtml_legend=1 00:05:17.865 --rc geninfo_all_blocks=1 00:05:17.865 --rc geninfo_unexecuted_blocks=1 00:05:17.865 00:05:17.865 ' 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57849 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.865 03:54:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57849 00:05:17.865 03:54:14 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57849 ']' 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.865 03:54:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.124 [2024-11-18 03:54:14.525569] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:18.124 [2024-11-18 03:54:14.525706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57849 ] 00:05:18.124 [2024-11-18 03:54:14.679798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.384 [2024-11-18 03:54:14.811505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.384 [2024-11-18 03:54:14.811541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.323 03:54:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.323 03:54:15 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:19.323 03:54:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57870 00:05:19.323 03:54:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.323 03:54:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.323 [ 00:05:19.323 "bdev_malloc_delete", 
00:05:19.323 "bdev_malloc_create", 00:05:19.323 "bdev_null_resize", 00:05:19.323 "bdev_null_delete", 00:05:19.323 "bdev_null_create", 00:05:19.323 "bdev_nvme_cuse_unregister", 00:05:19.323 "bdev_nvme_cuse_register", 00:05:19.323 "bdev_opal_new_user", 00:05:19.323 "bdev_opal_set_lock_state", 00:05:19.323 "bdev_opal_delete", 00:05:19.323 "bdev_opal_get_info", 00:05:19.323 "bdev_opal_create", 00:05:19.323 "bdev_nvme_opal_revert", 00:05:19.323 "bdev_nvme_opal_init", 00:05:19.323 "bdev_nvme_send_cmd", 00:05:19.323 "bdev_nvme_set_keys", 00:05:19.323 "bdev_nvme_get_path_iostat", 00:05:19.323 "bdev_nvme_get_mdns_discovery_info", 00:05:19.323 "bdev_nvme_stop_mdns_discovery", 00:05:19.323 "bdev_nvme_start_mdns_discovery", 00:05:19.323 "bdev_nvme_set_multipath_policy", 00:05:19.323 "bdev_nvme_set_preferred_path", 00:05:19.323 "bdev_nvme_get_io_paths", 00:05:19.323 "bdev_nvme_remove_error_injection", 00:05:19.323 "bdev_nvme_add_error_injection", 00:05:19.323 "bdev_nvme_get_discovery_info", 00:05:19.323 "bdev_nvme_stop_discovery", 00:05:19.323 "bdev_nvme_start_discovery", 00:05:19.323 "bdev_nvme_get_controller_health_info", 00:05:19.323 "bdev_nvme_disable_controller", 00:05:19.323 "bdev_nvme_enable_controller", 00:05:19.323 "bdev_nvme_reset_controller", 00:05:19.323 "bdev_nvme_get_transport_statistics", 00:05:19.323 "bdev_nvme_apply_firmware", 00:05:19.323 "bdev_nvme_detach_controller", 00:05:19.323 "bdev_nvme_get_controllers", 00:05:19.323 "bdev_nvme_attach_controller", 00:05:19.323 "bdev_nvme_set_hotplug", 00:05:19.323 "bdev_nvme_set_options", 00:05:19.323 "bdev_passthru_delete", 00:05:19.323 "bdev_passthru_create", 00:05:19.323 "bdev_lvol_set_parent_bdev", 00:05:19.323 "bdev_lvol_set_parent", 00:05:19.323 "bdev_lvol_check_shallow_copy", 00:05:19.323 "bdev_lvol_start_shallow_copy", 00:05:19.323 "bdev_lvol_grow_lvstore", 00:05:19.323 "bdev_lvol_get_lvols", 00:05:19.323 "bdev_lvol_get_lvstores", 00:05:19.323 "bdev_lvol_delete", 00:05:19.323 "bdev_lvol_set_read_only", 
00:05:19.323 "bdev_lvol_resize", 00:05:19.323 "bdev_lvol_decouple_parent", 00:05:19.323 "bdev_lvol_inflate", 00:05:19.323 "bdev_lvol_rename", 00:05:19.323 "bdev_lvol_clone_bdev", 00:05:19.323 "bdev_lvol_clone", 00:05:19.323 "bdev_lvol_snapshot", 00:05:19.323 "bdev_lvol_create", 00:05:19.323 "bdev_lvol_delete_lvstore", 00:05:19.323 "bdev_lvol_rename_lvstore", 00:05:19.323 "bdev_lvol_create_lvstore", 00:05:19.323 "bdev_raid_set_options", 00:05:19.323 "bdev_raid_remove_base_bdev", 00:05:19.323 "bdev_raid_add_base_bdev", 00:05:19.323 "bdev_raid_delete", 00:05:19.323 "bdev_raid_create", 00:05:19.323 "bdev_raid_get_bdevs", 00:05:19.323 "bdev_error_inject_error", 00:05:19.323 "bdev_error_delete", 00:05:19.323 "bdev_error_create", 00:05:19.323 "bdev_split_delete", 00:05:19.323 "bdev_split_create", 00:05:19.323 "bdev_delay_delete", 00:05:19.323 "bdev_delay_create", 00:05:19.323 "bdev_delay_update_latency", 00:05:19.323 "bdev_zone_block_delete", 00:05:19.323 "bdev_zone_block_create", 00:05:19.323 "blobfs_create", 00:05:19.323 "blobfs_detect", 00:05:19.323 "blobfs_set_cache_size", 00:05:19.323 "bdev_aio_delete", 00:05:19.323 "bdev_aio_rescan", 00:05:19.323 "bdev_aio_create", 00:05:19.323 "bdev_ftl_set_property", 00:05:19.324 "bdev_ftl_get_properties", 00:05:19.324 "bdev_ftl_get_stats", 00:05:19.324 "bdev_ftl_unmap", 00:05:19.324 "bdev_ftl_unload", 00:05:19.324 "bdev_ftl_delete", 00:05:19.324 "bdev_ftl_load", 00:05:19.324 "bdev_ftl_create", 00:05:19.324 "bdev_virtio_attach_controller", 00:05:19.324 "bdev_virtio_scsi_get_devices", 00:05:19.324 "bdev_virtio_detach_controller", 00:05:19.324 "bdev_virtio_blk_set_hotplug", 00:05:19.324 "bdev_iscsi_delete", 00:05:19.324 "bdev_iscsi_create", 00:05:19.324 "bdev_iscsi_set_options", 00:05:19.324 "accel_error_inject_error", 00:05:19.324 "ioat_scan_accel_module", 00:05:19.324 "dsa_scan_accel_module", 00:05:19.324 "iaa_scan_accel_module", 00:05:19.324 "keyring_file_remove_key", 00:05:19.324 "keyring_file_add_key", 00:05:19.324 
"keyring_linux_set_options", 00:05:19.324 "fsdev_aio_delete", 00:05:19.324 "fsdev_aio_create", 00:05:19.324 "iscsi_get_histogram", 00:05:19.324 "iscsi_enable_histogram", 00:05:19.324 "iscsi_set_options", 00:05:19.324 "iscsi_get_auth_groups", 00:05:19.324 "iscsi_auth_group_remove_secret", 00:05:19.324 "iscsi_auth_group_add_secret", 00:05:19.324 "iscsi_delete_auth_group", 00:05:19.324 "iscsi_create_auth_group", 00:05:19.324 "iscsi_set_discovery_auth", 00:05:19.324 "iscsi_get_options", 00:05:19.324 "iscsi_target_node_request_logout", 00:05:19.324 "iscsi_target_node_set_redirect", 00:05:19.324 "iscsi_target_node_set_auth", 00:05:19.324 "iscsi_target_node_add_lun", 00:05:19.324 "iscsi_get_stats", 00:05:19.324 "iscsi_get_connections", 00:05:19.324 "iscsi_portal_group_set_auth", 00:05:19.324 "iscsi_start_portal_group", 00:05:19.324 "iscsi_delete_portal_group", 00:05:19.324 "iscsi_create_portal_group", 00:05:19.324 "iscsi_get_portal_groups", 00:05:19.324 "iscsi_delete_target_node", 00:05:19.324 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.324 "iscsi_target_node_add_pg_ig_maps", 00:05:19.324 "iscsi_create_target_node", 00:05:19.324 "iscsi_get_target_nodes", 00:05:19.324 "iscsi_delete_initiator_group", 00:05:19.324 "iscsi_initiator_group_remove_initiators", 00:05:19.324 "iscsi_initiator_group_add_initiators", 00:05:19.324 "iscsi_create_initiator_group", 00:05:19.324 "iscsi_get_initiator_groups", 00:05:19.324 "nvmf_set_crdt", 00:05:19.324 "nvmf_set_config", 00:05:19.324 "nvmf_set_max_subsystems", 00:05:19.324 "nvmf_stop_mdns_prr", 00:05:19.324 "nvmf_publish_mdns_prr", 00:05:19.324 "nvmf_subsystem_get_listeners", 00:05:19.324 "nvmf_subsystem_get_qpairs", 00:05:19.324 "nvmf_subsystem_get_controllers", 00:05:19.324 "nvmf_get_stats", 00:05:19.324 "nvmf_get_transports", 00:05:19.324 "nvmf_create_transport", 00:05:19.324 "nvmf_get_targets", 00:05:19.324 "nvmf_delete_target", 00:05:19.324 "nvmf_create_target", 00:05:19.324 "nvmf_subsystem_allow_any_host", 00:05:19.324 
"nvmf_subsystem_set_keys", 00:05:19.324 "nvmf_subsystem_remove_host", 00:05:19.324 "nvmf_subsystem_add_host", 00:05:19.324 "nvmf_ns_remove_host", 00:05:19.324 "nvmf_ns_add_host", 00:05:19.324 "nvmf_subsystem_remove_ns", 00:05:19.324 "nvmf_subsystem_set_ns_ana_group", 00:05:19.324 "nvmf_subsystem_add_ns", 00:05:19.324 "nvmf_subsystem_listener_set_ana_state", 00:05:19.324 "nvmf_discovery_get_referrals", 00:05:19.324 "nvmf_discovery_remove_referral", 00:05:19.324 "nvmf_discovery_add_referral", 00:05:19.324 "nvmf_subsystem_remove_listener", 00:05:19.324 "nvmf_subsystem_add_listener", 00:05:19.324 "nvmf_delete_subsystem", 00:05:19.324 "nvmf_create_subsystem", 00:05:19.324 "nvmf_get_subsystems", 00:05:19.324 "env_dpdk_get_mem_stats", 00:05:19.324 "nbd_get_disks", 00:05:19.324 "nbd_stop_disk", 00:05:19.324 "nbd_start_disk", 00:05:19.324 "ublk_recover_disk", 00:05:19.324 "ublk_get_disks", 00:05:19.324 "ublk_stop_disk", 00:05:19.324 "ublk_start_disk", 00:05:19.324 "ublk_destroy_target", 00:05:19.324 "ublk_create_target", 00:05:19.324 "virtio_blk_create_transport", 00:05:19.324 "virtio_blk_get_transports", 00:05:19.324 "vhost_controller_set_coalescing", 00:05:19.324 "vhost_get_controllers", 00:05:19.324 "vhost_delete_controller", 00:05:19.324 "vhost_create_blk_controller", 00:05:19.324 "vhost_scsi_controller_remove_target", 00:05:19.324 "vhost_scsi_controller_add_target", 00:05:19.324 "vhost_start_scsi_controller", 00:05:19.324 "vhost_create_scsi_controller", 00:05:19.324 "thread_set_cpumask", 00:05:19.324 "scheduler_set_options", 00:05:19.324 "framework_get_governor", 00:05:19.324 "framework_get_scheduler", 00:05:19.324 "framework_set_scheduler", 00:05:19.324 "framework_get_reactors", 00:05:19.324 "thread_get_io_channels", 00:05:19.324 "thread_get_pollers", 00:05:19.324 "thread_get_stats", 00:05:19.324 "framework_monitor_context_switch", 00:05:19.324 "spdk_kill_instance", 00:05:19.324 "log_enable_timestamps", 00:05:19.324 "log_get_flags", 00:05:19.324 "log_clear_flag", 
00:05:19.324 "log_set_flag", 00:05:19.324 "log_get_level", 00:05:19.324 "log_set_level", 00:05:19.324 "log_get_print_level", 00:05:19.324 "log_set_print_level", 00:05:19.324 "framework_enable_cpumask_locks", 00:05:19.324 "framework_disable_cpumask_locks", 00:05:19.324 "framework_wait_init", 00:05:19.324 "framework_start_init", 00:05:19.324 "scsi_get_devices", 00:05:19.324 "bdev_get_histogram", 00:05:19.324 "bdev_enable_histogram", 00:05:19.324 "bdev_set_qos_limit", 00:05:19.324 "bdev_set_qd_sampling_period", 00:05:19.324 "bdev_get_bdevs", 00:05:19.324 "bdev_reset_iostat", 00:05:19.324 "bdev_get_iostat", 00:05:19.324 "bdev_examine", 00:05:19.324 "bdev_wait_for_examine", 00:05:19.324 "bdev_set_options", 00:05:19.324 "accel_get_stats", 00:05:19.324 "accel_set_options", 00:05:19.324 "accel_set_driver", 00:05:19.324 "accel_crypto_key_destroy", 00:05:19.324 "accel_crypto_keys_get", 00:05:19.324 "accel_crypto_key_create", 00:05:19.324 "accel_assign_opc", 00:05:19.324 "accel_get_module_info", 00:05:19.324 "accel_get_opc_assignments", 00:05:19.324 "vmd_rescan", 00:05:19.324 "vmd_remove_device", 00:05:19.324 "vmd_enable", 00:05:19.324 "sock_get_default_impl", 00:05:19.324 "sock_set_default_impl", 00:05:19.324 "sock_impl_set_options", 00:05:19.324 "sock_impl_get_options", 00:05:19.324 "iobuf_get_stats", 00:05:19.324 "iobuf_set_options", 00:05:19.324 "keyring_get_keys", 00:05:19.324 "framework_get_pci_devices", 00:05:19.324 "framework_get_config", 00:05:19.324 "framework_get_subsystems", 00:05:19.324 "fsdev_set_opts", 00:05:19.324 "fsdev_get_opts", 00:05:19.324 "trace_get_info", 00:05:19.324 "trace_get_tpoint_group_mask", 00:05:19.324 "trace_disable_tpoint_group", 00:05:19.324 "trace_enable_tpoint_group", 00:05:19.324 "trace_clear_tpoint_mask", 00:05:19.324 "trace_set_tpoint_mask", 00:05:19.324 "notify_get_notifications", 00:05:19.324 "notify_get_types", 00:05:19.324 "spdk_get_version", 00:05:19.324 "rpc_get_methods" 00:05:19.324 ] 00:05:19.324 03:54:15 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:19.324 03:54:15 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.324 03:54:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.584 03:54:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:19.584 03:54:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57849 00:05:19.584 03:54:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57849 ']' 00:05:19.584 03:54:15 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57849 00:05:19.584 03:54:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:19.584 03:54:15 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.584 03:54:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57849 00:05:19.584 03:54:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.584 03:54:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.584 killing process with pid 57849 00:05:19.584 03:54:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57849' 00:05:19.584 03:54:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57849 00:05:19.584 03:54:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57849 00:05:22.159 00:05:22.159 real 0m4.375s 00:05:22.159 user 0m7.820s 00:05:22.159 sys 0m0.619s 00:05:22.159 03:54:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.159 03:54:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.159 ************************************ 00:05:22.159 END TEST spdkcli_tcp 00:05:22.159 ************************************ 00:05:22.159 03:54:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.159 03:54:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.159 03:54:18 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.160 03:54:18 -- common/autotest_common.sh@10 -- # set +x 00:05:22.160 ************************************ 00:05:22.160 START TEST dpdk_mem_utility 00:05:22.160 ************************************ 00:05:22.160 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.160 * Looking for test storage... 00:05:22.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:22.160 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.160 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.160 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:22.419 
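The `scripts/common.sh` trace above (`lt 1.15 2` calling `cmp_versions`) splits both version strings on the separators in `IFS=.-:`, treats missing components as zero, and compares component by component — here deciding whether the detected lcov is older than 2.x. A rough Python equivalent of that logic; the function name and padding scheme are mine, inferred from the trace rather than taken from SPDK:

```python
import re

def cmp_versions(ver1, op, ver2):
    """Compare dotted version strings component-wise, like scripts/common.sh."""
    # Split on the same separators the bash version uses via IFS=.-:
    v1 = [int(x) for x in re.split(r"[.\-:]", ver1)]
    v2 = [int(x) for x in re.split(r"[.\-:]", ver2)]
    # Pad with zeros: bash's unset array slots compare as empty/zero.
    length = max(len(v1), len(v2))
    v1 += [0] * (length - len(v1))
    v2 += [0] * (length - len(v2))
    for a, b in zip(v1, v2):
        if a != b:
            return {"<": a < b, ">": a > b}[op]
    return False  # equal versions satisfy neither strict comparison

print(cmp_versions("1.15", "<", "2"))  # the check from the log
```

This is why the log then switches between the plain and `--rc lcov_branch_coverage=1 ...` forms of `LCOV_OPTS`: the option set depends on which side of the version check lcov lands on.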
03:54:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.419 03:54:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.419 --rc genhtml_branch_coverage=1 00:05:22.419 --rc genhtml_function_coverage=1 00:05:22.419 --rc genhtml_legend=1 00:05:22.419 --rc geninfo_all_blocks=1 00:05:22.419 --rc geninfo_unexecuted_blocks=1 00:05:22.419 00:05:22.419 ' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.419 --rc 
genhtml_branch_coverage=1 00:05:22.419 --rc genhtml_function_coverage=1 00:05:22.419 --rc genhtml_legend=1 00:05:22.419 --rc geninfo_all_blocks=1 00:05:22.419 --rc geninfo_unexecuted_blocks=1 00:05:22.419 00:05:22.419 ' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.419 --rc genhtml_branch_coverage=1 00:05:22.419 --rc genhtml_function_coverage=1 00:05:22.419 --rc genhtml_legend=1 00:05:22.419 --rc geninfo_all_blocks=1 00:05:22.419 --rc geninfo_unexecuted_blocks=1 00:05:22.419 00:05:22.419 ' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.419 --rc genhtml_branch_coverage=1 00:05:22.419 --rc genhtml_function_coverage=1 00:05:22.419 --rc genhtml_legend=1 00:05:22.419 --rc geninfo_all_blocks=1 00:05:22.419 --rc geninfo_unexecuted_blocks=1 00:05:22.419 00:05:22.419 ' 00:05:22.419 03:54:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:22.419 03:54:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.419 03:54:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57975 00:05:22.419 03:54:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57975 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57975 ']' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.419 03:54:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.419 [2024-11-18 03:54:18.955421] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:22.419 [2024-11-18 03:54:18.955555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57975 ] 00:05:22.687 [2024-11-18 03:54:19.128909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.687 [2024-11-18 03:54:19.261258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.739 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.739 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:23.739 03:54:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:23.739 03:54:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:23.739 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.739 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:23.739 { 00:05:23.739 "filename": "/tmp/spdk_mem_dump.txt" 00:05:23.739 } 00:05:23.739 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.739 03:54:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.739 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:23.739 1 heaps 
totaling size 816.000000 MiB 00:05:23.739 size: 816.000000 MiB heap id: 0 00:05:23.739 end heaps---------- 00:05:23.739 9 mempools totaling size 595.772034 MiB 00:05:23.739 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.739 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.739 size: 92.545471 MiB name: bdev_io_57975 00:05:23.739 size: 50.003479 MiB name: msgpool_57975 00:05:23.739 size: 36.509338 MiB name: fsdev_io_57975 00:05:23.739 size: 21.763794 MiB name: PDU_Pool 00:05:23.739 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.739 size: 4.133484 MiB name: evtpool_57975 00:05:23.739 size: 0.026123 MiB name: Session_Pool 00:05:23.739 end mempools------- 00:05:23.739 6 memzones totaling size 4.142822 MiB 00:05:23.739 size: 1.000366 MiB name: RG_ring_0_57975 00:05:23.739 size: 1.000366 MiB name: RG_ring_1_57975 00:05:23.739 size: 1.000366 MiB name: RG_ring_4_57975 00:05:23.739 size: 1.000366 MiB name: RG_ring_5_57975 00:05:23.739 size: 0.125366 MiB name: RG_ring_2_57975 00:05:23.739 size: 0.015991 MiB name: RG_ring_3_57975 00:05:23.739 end memzones------- 00:05:23.739 03:54:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.739 heap id: 0 total size: 816.000000 MiB number of busy elements: 305 number of free elements: 18 00:05:23.739 list of free elements. 
size: 16.793823 MiB 00:05:23.739 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:23.739 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:23.739 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:23.739 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:23.739 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:23.739 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:23.739 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:23.739 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:23.739 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:23.739 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:23.739 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:23.739 element at address: 0x20001ac00000 with size: 0.564148 MiB 00:05:23.739 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:23.739 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:23.739 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:23.739 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:23.739 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:23.739 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:23.739 list of standard malloc elements. 
size: 199.285278 MiB 00:05:23.739 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:23.739 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:23.739 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:23.739 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:23.739 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:23.739 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:23.739 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:23.739 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:23.739 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:23.739 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:23.739 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:23.739 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:23.739 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:23.740 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:23.740 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:23.740 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:23.740 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac91bc0 with size: 0.000244 
MiB 00:05:23.740 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:23.740 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac937c0 
with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:23.741 element at 
address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:23.741 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:23.741 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c680 with size: 0.000244 MiB 
00:05:23.741 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e280 with 
size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:23.741 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:23.741 element at address: 
0x20002806fe80 with size: 0.000244 MiB 00:05:23.741 list of memzone associated elements. size: 599.920898 MiB 00:05:23.741 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:23.741 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.741 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:23.741 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.741 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:23.741 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57975_0 00:05:23.741 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:23.741 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57975_0 00:05:23.741 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:23.741 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57975_0 00:05:23.741 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:23.741 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.741 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:23.741 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.741 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:23.741 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57975_0 00:05:23.741 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:23.741 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57975 00:05:23.741 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:23.741 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57975 00:05:23.741 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:23.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.742 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:23.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.742 
element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:23.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.742 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:23.742 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.742 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:23.742 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57975 00:05:23.742 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:23.742 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57975 00:05:23.742 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:23.742 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57975 00:05:23.742 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:23.742 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57975 00:05:23.742 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:23.742 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57975 00:05:23.742 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:23.742 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57975 00:05:23.742 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:23.742 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.742 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:23.742 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.742 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:23.742 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.742 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:23.742 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57975 00:05:23.742 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:23.742 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57975 
00:05:23.742 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:23.742 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.742 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:23.742 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.742 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:23.742 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57975 00:05:23.742 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:23.742 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.742 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:23.742 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57975 00:05:23.742 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:23.742 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57975 00:05:23.742 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:23.742 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57975 00:05:23.742 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:23.742 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.742 03:54:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.742 03:54:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57975 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57975 ']' 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57975 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57975 00:05:23.742 03:54:20 dpdk_mem_utility -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.742 killing process with pid 57975 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57975' 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57975 00:05:23.742 03:54:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57975 00:05:26.280 00:05:26.280 real 0m4.269s 00:05:26.280 user 0m3.991s 00:05:26.280 sys 0m0.692s 00:05:26.280 03:54:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.280 03:54:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.280 ************************************ 00:05:26.280 END TEST dpdk_mem_utility 00:05:26.280 ************************************ 00:05:26.540 03:54:22 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.540 03:54:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.540 03:54:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.540 03:54:22 -- common/autotest_common.sh@10 -- # set +x 00:05:26.540 ************************************ 00:05:26.540 START TEST event 00:05:26.540 ************************************ 00:05:26.540 03:54:22 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.540 * Looking for test storage... 
00:05:26.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.540 03:54:23 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.540 03:54:23 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.540 03:54:23 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.540 03:54:23 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.540 03:54:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.540 03:54:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.540 03:54:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.540 03:54:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.540 03:54:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.540 03:54:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.540 03:54:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.540 03:54:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.540 03:54:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.540 03:54:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.540 03:54:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.541 03:54:23 event -- scripts/common.sh@344 -- # case "$op" in 00:05:26.541 03:54:23 event -- scripts/common.sh@345 -- # : 1 00:05:26.541 03:54:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.541 03:54:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.800 03:54:23 event -- scripts/common.sh@365 -- # decimal 1 00:05:26.800 03:54:23 event -- scripts/common.sh@353 -- # local d=1 00:05:26.800 03:54:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.800 03:54:23 event -- scripts/common.sh@355 -- # echo 1 00:05:26.800 03:54:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.800 03:54:23 event -- scripts/common.sh@366 -- # decimal 2 00:05:26.800 03:54:23 event -- scripts/common.sh@353 -- # local d=2 00:05:26.800 03:54:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.800 03:54:23 event -- scripts/common.sh@355 -- # echo 2 00:05:26.800 03:54:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.800 03:54:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.800 03:54:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.800 03:54:23 event -- scripts/common.sh@368 -- # return 0 00:05:26.800 03:54:23 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.800 03:54:23 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.800 --rc genhtml_branch_coverage=1 00:05:26.800 --rc genhtml_function_coverage=1 00:05:26.800 --rc genhtml_legend=1 00:05:26.800 --rc geninfo_all_blocks=1 00:05:26.800 --rc geninfo_unexecuted_blocks=1 00:05:26.800 00:05:26.800 ' 00:05:26.800 03:54:23 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.800 --rc genhtml_branch_coverage=1 00:05:26.800 --rc genhtml_function_coverage=1 00:05:26.800 --rc genhtml_legend=1 00:05:26.800 --rc geninfo_all_blocks=1 00:05:26.800 --rc geninfo_unexecuted_blocks=1 00:05:26.800 00:05:26.800 ' 00:05:26.800 03:54:23 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.800 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:26.800 --rc genhtml_branch_coverage=1 00:05:26.800 --rc genhtml_function_coverage=1 00:05:26.800 --rc genhtml_legend=1 00:05:26.800 --rc geninfo_all_blocks=1 00:05:26.800 --rc geninfo_unexecuted_blocks=1 00:05:26.800 00:05:26.801 ' 00:05:26.801 03:54:23 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.801 --rc genhtml_branch_coverage=1 00:05:26.801 --rc genhtml_function_coverage=1 00:05:26.801 --rc genhtml_legend=1 00:05:26.801 --rc geninfo_all_blocks=1 00:05:26.801 --rc geninfo_unexecuted_blocks=1 00:05:26.801 00:05:26.801 ' 00:05:26.801 03:54:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:26.801 03:54:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.801 03:54:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.801 03:54:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:26.801 03:54:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.801 03:54:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.801 ************************************ 00:05:26.801 START TEST event_perf 00:05:26.801 ************************************ 00:05:26.801 03:54:23 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.801 Running I/O for 1 seconds...[2024-11-18 03:54:23.261251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:26.801 [2024-11-18 03:54:23.261367] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58084 ] 00:05:26.801 [2024-11-18 03:54:23.435697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.060 [2024-11-18 03:54:23.580463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.060 [2024-11-18 03:54:23.580646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.060 Running I/O for 1 seconds...[2024-11-18 03:54:23.580797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.060 [2024-11-18 03:54:23.580927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.442 00:05:28.442 lcore 0: 97212 00:05:28.442 lcore 1: 97215 00:05:28.442 lcore 2: 97212 00:05:28.442 lcore 3: 97209 00:05:28.442 done. 
00:05:28.442 00:05:28.442 real 0m1.612s 00:05:28.442 user 0m4.376s 00:05:28.442 sys 0m0.114s 00:05:28.442 03:54:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.442 03:54:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 ************************************ 00:05:28.442 END TEST event_perf 00:05:28.442 ************************************ 00:05:28.442 03:54:24 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:28.442 03:54:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.442 03:54:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.442 03:54:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.442 ************************************ 00:05:28.442 START TEST event_reactor 00:05:28.442 ************************************ 00:05:28.442 03:54:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:28.442 [2024-11-18 03:54:24.942778] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:28.442 [2024-11-18 03:54:24.942887] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58124 ] 00:05:28.701 [2024-11-18 03:54:25.117292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.701 [2024-11-18 03:54:25.258551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.082 test_start 00:05:30.082 oneshot 00:05:30.082 tick 100 00:05:30.082 tick 100 00:05:30.082 tick 250 00:05:30.082 tick 100 00:05:30.082 tick 100 00:05:30.082 tick 250 00:05:30.082 tick 100 00:05:30.082 tick 500 00:05:30.082 tick 100 00:05:30.082 tick 100 00:05:30.082 tick 250 00:05:30.082 tick 100 00:05:30.082 tick 100 00:05:30.082 test_end 00:05:30.082 00:05:30.082 real 0m1.609s 00:05:30.082 user 0m1.389s 00:05:30.082 sys 0m0.110s 00:05:30.082 03:54:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.082 03:54:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:30.082 ************************************ 00:05:30.082 END TEST event_reactor 00:05:30.082 ************************************ 00:05:30.082 03:54:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.082 03:54:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:30.082 03:54:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.082 03:54:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.082 ************************************ 00:05:30.082 START TEST event_reactor_perf 00:05:30.082 ************************************ 00:05:30.082 03:54:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.082 [2024-11-18 
03:54:26.617483] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:30.082 [2024-11-18 03:54:26.617587] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58166 ] 00:05:30.341 [2024-11-18 03:54:26.789650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.341 [2024-11-18 03:54:26.929456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.722 test_start 00:05:31.722 test_end 00:05:31.722 Performance: 394761 events per second 00:05:31.723 00:05:31.723 real 0m1.607s 00:05:31.723 user 0m1.377s 00:05:31.723 sys 0m0.120s 00:05:31.723 03:54:28 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.723 03:54:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.723 ************************************ 00:05:31.723 END TEST event_reactor_perf 00:05:31.723 ************************************ 00:05:31.723 03:54:28 event -- event/event.sh@49 -- # uname -s 00:05:31.723 03:54:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.723 03:54:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.723 03:54:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.723 03:54:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.723 03:54:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.723 ************************************ 00:05:31.723 START TEST event_scheduler 00:05:31.723 ************************************ 00:05:31.723 03:54:28 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.983 * Looking for test storage... 
00:05:31.983 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.983 03:54:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.983 --rc genhtml_branch_coverage=1 00:05:31.983 --rc genhtml_function_coverage=1 00:05:31.983 --rc genhtml_legend=1 00:05:31.983 --rc geninfo_all_blocks=1 00:05:31.983 --rc geninfo_unexecuted_blocks=1 00:05:31.983 00:05:31.983 ' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.983 --rc genhtml_branch_coverage=1 00:05:31.983 --rc genhtml_function_coverage=1 00:05:31.983 --rc 
genhtml_legend=1 00:05:31.983 --rc geninfo_all_blocks=1 00:05:31.983 --rc geninfo_unexecuted_blocks=1 00:05:31.983 00:05:31.983 ' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.983 --rc genhtml_branch_coverage=1 00:05:31.983 --rc genhtml_function_coverage=1 00:05:31.983 --rc genhtml_legend=1 00:05:31.983 --rc geninfo_all_blocks=1 00:05:31.983 --rc geninfo_unexecuted_blocks=1 00:05:31.983 00:05:31.983 ' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.983 --rc genhtml_branch_coverage=1 00:05:31.983 --rc genhtml_function_coverage=1 00:05:31.983 --rc genhtml_legend=1 00:05:31.983 --rc geninfo_all_blocks=1 00:05:31.983 --rc geninfo_unexecuted_blocks=1 00:05:31.983 00:05:31.983 ' 00:05:31.983 03:54:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.983 03:54:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58236 00:05:31.983 03:54:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.983 03:54:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.983 03:54:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58236 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58236 ']' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.983 03:54:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.983 [2024-11-18 03:54:28.569313] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:31.983 [2024-11-18 03:54:28.569475] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58236 ] 00:05:32.242 [2024-11-18 03:54:28.750289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.242 [2024-11-18 03:54:28.867599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.243 [2024-11-18 03:54:28.867789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.243 [2024-11-18 03:54:28.868118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.243 [2024-11-18 03:54:28.868210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:32.812 03:54:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.812 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.812 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.812 POWER: Cannot set governor of lcore 0 to performance 00:05:32.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.812 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.812 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.812 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.812 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:32.812 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:32.812 POWER: Unable to set Power Management Environment for lcore 0 00:05:32.812 [2024-11-18 03:54:29.405074] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:32.812 [2024-11-18 03:54:29.405102] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:32.812 [2024-11-18 03:54:29.405121] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.812 [2024-11-18 03:54:29.405152] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.812 [2024-11-18 03:54:29.405174] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.812 [2024-11-18 03:54:29.405186] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.812 03:54:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.812 03:54:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 [2024-11-18 03:54:29.734793] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:33.402 03:54:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.402 03:54:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.402 03:54:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 ************************************ 00:05:33.402 START TEST scheduler_create_thread 00:05:33.402 ************************************ 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 2 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 3 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 4 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 5 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 6 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.402 7 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.402 8 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.402 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.403 9 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.403 10 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.403 03:54:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.780 03:54:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.780 03:54:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.780 03:54:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.780 03:54:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.780 03:54:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.718 03:54:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.718 03:54:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.718 03:54:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.718 03:54:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.287 03:54:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.287 03:54:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:36.287 03:54:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:36.287 03:54:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.287 03:54:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.227 ************************************ 00:05:37.227 END TEST scheduler_create_thread 00:05:37.227 ************************************ 00:05:37.227 03:54:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.227 00:05:37.227 real 0m3.884s 00:05:37.227 user 0m0.030s 00:05:37.227 sys 0m0.008s 00:05:37.227 03:54:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.227 03:54:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.227 03:54:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:37.227 03:54:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58236 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58236 ']' 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58236 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58236 00:05:37.227 killing process with pid 58236 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58236' 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58236 00:05:37.227 03:54:33 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58236 00:05:37.487 [2024-11-18 03:54:34.012293] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:38.869 00:05:38.869 real 0m6.875s 00:05:38.869 user 0m14.792s 00:05:38.869 sys 0m0.535s 00:05:38.869 03:54:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.869 03:54:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.869 ************************************ 00:05:38.869 END TEST event_scheduler 00:05:38.869 ************************************ 00:05:38.869 03:54:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.869 03:54:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.869 03:54:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.869 03:54:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.869 03:54:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.869 ************************************ 00:05:38.869 START TEST app_repeat 00:05:38.869 ************************************ 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58359 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.869 
03:54:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.869 Process app_repeat pid: 58359 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58359' 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.869 spdk_app_start Round 0 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.869 03:54:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58359 /var/tmp/spdk-nbd.sock 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58359 ']' 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.869 03:54:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.869 [2024-11-18 03:54:35.266464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:38.869 [2024-11-18 03:54:35.266580] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58359 ] 00:05:38.869 [2024-11-18 03:54:35.423437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.143 [2024-11-18 03:54:35.567228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.143 [2024-11-18 03:54:35.567267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.758 03:54:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.758 03:54:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.758 03:54:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.028 Malloc0 00:05:40.028 03:54:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.288 Malloc1 00:05:40.288 03:54:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.288 03:54:36 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.288 03:54:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.288 /dev/nbd0 00:05:40.549 03:54:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.549 03:54:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.549 1+0 records in 00:05:40.549 1+0 
records out 00:05:40.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266366 s, 15.4 MB/s 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.549 03:54:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.549 03:54:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.549 03:54:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.549 03:54:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.549 /dev/nbd1 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.809 1+0 records in 00:05:40.809 1+0 records out 00:05:40.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548752 s, 7.5 MB/s 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.809 03:54:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.809 { 00:05:40.809 "nbd_device": "/dev/nbd0", 00:05:40.809 "bdev_name": "Malloc0" 00:05:40.809 }, 00:05:40.809 { 00:05:40.809 "nbd_device": "/dev/nbd1", 00:05:40.809 "bdev_name": "Malloc1" 00:05:40.809 } 00:05:40.809 ]' 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.809 03:54:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.809 { 00:05:40.809 "nbd_device": "/dev/nbd0", 00:05:40.809 "bdev_name": "Malloc0" 00:05:40.809 }, 00:05:40.809 { 00:05:40.809 "nbd_device": "/dev/nbd1", 00:05:40.809 "bdev_name": "Malloc1" 00:05:40.809 } 00:05:40.809 ]' 
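The trace above shows `waitfornbd` looping up to 20 times, probing `/proc/partitions` with `grep -q -w` until the nbd device appears, then proving it is readable with a single direct-I/O `dd`. A minimal sketch of that polling pattern follows; the generic file/word arguments are stand-ins for `/proc/partitions` and the `nbdX` name, and the retry count and sleep interval are illustrative, not the helper's exact values:

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd-style polling loop seen in the trace:
# retry a whole-word grep against a status file until the expected
# entry shows up, or give up after a bounded number of attempts.
wait_for_word() {
    local word=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches nbd0 but not nbd01; -q keeps the trace quiet
        grep -q -w "$word" "$file" && return 0
        sleep 0.1
    done
    return 1  # entry never appeared within the retry budget
}
```

In the real helper the success path is followed by `dd if=/dev/nbdX ... iflag=direct` plus a `stat` of the copied file, which confirms the device is not just listed but actually serving reads.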
00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.069 /dev/nbd1' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.069 /dev/nbd1' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.069 256+0 records in 00:05:41.069 256+0 records out 00:05:41.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131311 s, 79.9 MB/s 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.069 256+0 records in 00:05:41.069 256+0 records out 00:05:41.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264493 s, 39.6 MB/s 00:05:41.069 03:54:37 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.069 256+0 records in 00:05:41.069 256+0 records out 00:05:41.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261237 s, 40.1 MB/s 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.069 03:54:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.329 03:54:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.588 03:54:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.588 03:54:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.588 03:54:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.588 03:54:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.588 03:54:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.588 03:54:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.848 03:54:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.848 03:54:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.107 03:54:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.488 [2024-11-18 03:54:39.864377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.488 [2024-11-18 03:54:39.976507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.488 [2024-11-18 03:54:39.976510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.748 
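Between the start and stop phases the trace calls `nbd_get_count`: it fetches the disk list over RPC, extracts the `.nbd_device` names, and counts matches with `grep -c /dev/nbd` (falling back to `true` so an empty list yields 0 instead of a failed pipeline). A small sketch of that counting step, with a literal name list standing in for the real `nbd_get_disks` output:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count pattern from the trace: count how many
# attached devices appear in the extracted name list. The argument
# stands in for `rpc.py ... nbd_get_disks | jq -r '.[] | .nbd_device'`.
count_nbd() {
    # grep -c prints 0 but exits nonzero on no match, hence the fallback
    echo "$1" | grep -c /dev/nbd || true
}
```

The caller then asserts the count against the expected value (`'[' 2 -ne 2 ']'` in the trace) before proceeding to the data-verify phase.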
[2024-11-18 03:54:40.163993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.748 [2024-11-18 03:54:40.164114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.129 spdk_app_start Round 1 00:05:45.129 03:54:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.130 03:54:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.130 03:54:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58359 /var/tmp/spdk-nbd.sock 00:05:45.130 03:54:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58359 ']' 00:05:45.130 03:54:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.130 03:54:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.130 03:54:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:45.130 03:54:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.130 03:54:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.389 03:54:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.389 03:54:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.389 03:54:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.649 Malloc0 00:05:45.649 03:54:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.909 Malloc1 00:05:45.909 03:54:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.909 03:54:42 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.909 03:54:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.170 /dev/nbd0 00:05:46.170 03:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.170 03:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.170 1+0 records in 00:05:46.170 1+0 records out 00:05:46.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330851 s, 12.4 MB/s 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.170 
03:54:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.170 03:54:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.170 03:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.170 03:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.170 03:54:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.430 /dev/nbd1 00:05:46.430 03:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.430 03:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.430 1+0 records in 00:05:46.430 1+0 records out 00:05:46.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247564 s, 16.5 MB/s 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.430 03:54:42 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.430 03:54:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.430 03:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.430 03:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.430 03:54:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.430 03:54:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.431 03:54:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.701 { 00:05:46.701 "nbd_device": "/dev/nbd0", 00:05:46.701 "bdev_name": "Malloc0" 00:05:46.701 }, 00:05:46.701 { 00:05:46.701 "nbd_device": "/dev/nbd1", 00:05:46.701 "bdev_name": "Malloc1" 00:05:46.701 } 00:05:46.701 ]' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.701 { 00:05:46.701 "nbd_device": "/dev/nbd0", 00:05:46.701 "bdev_name": "Malloc0" 00:05:46.701 }, 00:05:46.701 { 00:05:46.701 "nbd_device": "/dev/nbd1", 00:05:46.701 "bdev_name": "Malloc1" 00:05:46.701 } 00:05:46.701 ]' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.701 /dev/nbd1' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.701 /dev/nbd1' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.701 
03:54:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.701 256+0 records in 00:05:46.701 256+0 records out 00:05:46.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048811 s, 215 MB/s 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.701 256+0 records in 00:05:46.701 256+0 records out 00:05:46.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235615 s, 44.5 MB/s 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.701 256+0 records in 00:05:46.701 256+0 records out 00:05:46.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266836 s, 39.3 MB/s 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.701 03:54:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.702 03:54:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.976 03:54:43 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.976 03:54:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.236 03:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.496 03:54:43 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.496 03:54:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.496 03:54:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.066 03:54:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.006 [2024-11-18 03:54:45.635057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.266 [2024-11-18 03:54:45.773758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.266 [2024-11-18 03:54:45.773820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.526 [2024-11-18 03:54:46.002416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.526 [2024-11-18 03:54:46.002497] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.907 spdk_app_start Round 2 00:05:50.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
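Each round's data check follows the same shape: `nbd_dd_data_verify ... write` fills a 1 MiB temp file from `/dev/urandom` and `dd`s it onto every device, then `... verify` runs `cmp -b -n 1M` of the temp file against each device before deleting it. A hedged sketch of that write-and-verify cycle, using plain files as stand-in targets; the real helper adds `oflag=direct` because its targets are `/dev/nbd*` block devices:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify cycle from the trace:
# generate 1 MiB of random data, copy it to each target, then compare
# the first 1 MiB of every target back against the source.
write_and_verify() {
    local tmp=$1; shift
    local dev
    # write phase: 256 x 4096-byte blocks, as in the trace
    dd if=/dev/urandom of="$tmp" bs=4096 count=256 status=none
    for dev in "$@"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 status=none
    done
    # verify phase: any mismatch fails the whole check
    for dev in "$@"; do
        cmp -b -n 1M "$tmp" "$dev" || return 1
    done
}
```

A mismatch at this step is what would distinguish a device that merely attached from one that round-trips data correctly, which is why the helper verifies before tearing the disks down.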
00:05:50.907 03:54:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.907 03:54:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.907 03:54:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58359 /var/tmp/spdk-nbd.sock 00:05:50.907 03:54:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58359 ']' 00:05:50.907 03:54:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.907 03:54:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.907 03:54:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.907 03:54:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.907 03:54:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.166 03:54:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.166 03:54:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:51.166 03:54:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.428 Malloc0 00:05:51.428 03:54:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.688 Malloc1 00:05:51.689 03:54:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.689 03:54:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.948 /dev/nbd0 00:05:51.948 03:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.948 03:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.948 03:54:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.948 03:54:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.948 03:54:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.948 03:54:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.948 03:54:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.948 03:54:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.949 1+0 records in 00:05:51.949 1+0 records out 00:05:51.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461355 s, 8.9 MB/s 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.949 03:54:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.949 03:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.949 03:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.949 03:54:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.208 /dev/nbd1 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:52.208 03:54:48 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.208 1+0 records in 00:05:52.208 1+0 records out 00:05:52.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242968 s, 16.9 MB/s 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.208 03:54:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.208 { 00:05:52.208 "nbd_device": "/dev/nbd0", 00:05:52.208 "bdev_name": "Malloc0" 00:05:52.208 }, 00:05:52.208 { 00:05:52.208 "nbd_device": "/dev/nbd1", 00:05:52.208 "bdev_name": "Malloc1" 00:05:52.208 } 00:05:52.208 ]' 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.208 { 
00:05:52.208 "nbd_device": "/dev/nbd0", 00:05:52.208 "bdev_name": "Malloc0" 00:05:52.208 }, 00:05:52.208 { 00:05:52.208 "nbd_device": "/dev/nbd1", 00:05:52.208 "bdev_name": "Malloc1" 00:05:52.208 } 00:05:52.208 ]' 00:05:52.208 03:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.468 /dev/nbd1' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.468 /dev/nbd1' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.468 256+0 records in 00:05:52.468 256+0 records out 00:05:52.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00576669 s, 182 MB/s 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.468 03:54:48 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.468 256+0 records in 00:05:52.468 256+0 records out 00:05:52.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251864 s, 41.6 MB/s 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.468 256+0 records in 00:05:52.468 256+0 records out 00:05:52.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264261 s, 39.7 MB/s 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
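The write/verify pass above (`nbd_dd_data_verify`) fills a 1 MiB random pattern file, dd's it onto each nbd device, then byte-compares each device against the pattern with `cmp`. A minimal sketch of that flow, with regular temp files standing in for `/dev/nbd0` and `/dev/nbd1` (the `iflag=direct`/`oflag=direct` flags are dropped since they need a block device):

```shell
# Sketch of nbd_dd_data_verify's write + verify passes; regular temp files
# stand in for /dev/nbd0 and /dev/nbd1 (so iflag/oflag=direct are omitted).
set -e
pattern=$(mktemp)                   # plays the role of nbdrandtest
nbd_list=("$(mktemp)" "$(mktemp)")  # stand-ins for /dev/nbd0 /dev/nbd1

# write pass: 256 x 4 KiB of random data, copied to every "device"
dd if=/dev/urandom of="$pattern" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
    dd if="$pattern" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify pass: byte-compare the first 1M of each "device" with the pattern
for dev in "${nbd_list[@]}"; do
    cmp -n 1M "$pattern" "$dev"
done
echo "data verify ok"
```

`cmp` exits non-zero on the first mismatch, so with `set -e` any corrupted block fails the whole pass, which is exactly how the test detects a bad nbd write.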
00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.468 03:54:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.468 03:54:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.468 03:54:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.468 03:54:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.468 03:54:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.731 03:54:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.991 03:54:49 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.991 03:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.251 03:54:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.252 03:54:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.252 03:54:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.252 03:54:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.252 03:54:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.252 03:54:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.526 03:54:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.921 
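Both `waitfornbd` (seen during `nbd_start_disks`) and `waitfornbd_exit` (seen during `nbd_stop_disks` above) poll `/proc/partitions` up to 20 times for the device name to appear or disappear. A sketch of that polling, with a temp file standing in for `/proc/partitions` so it runs without nbd devices; the function names here are illustrative, not the originals:

```shell
# Sketch of the waitfornbd / waitfornbd_exit polling seen in the log; a temp
# file stands in for /proc/partitions so this runs without any nbd device.
partitions=$(mktemp)
echo "43 0 1048576 nbd0" > "$partitions"

wait_for_entry() {            # appear (waitfornbd-style)
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
wait_for_gone() {             # disappear (waitfornbd_exit-style)
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$partitions" || return 0
        sleep 0.1
    done
    return 1
}

wait_for_entry nbd0 && state1=present
: > "$partitions"             # simulate nbd_stop_disk removing the device
wait_for_gone nbd0 && state2=gone
echo "nbd0: $state1 then $state2"
```

`grep -w` matches `nbd0` as a whole word so that `nbd0` does not accidentally match a partition like `nbd0p1`'s parent row substring.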
[2024-11-18 03:54:51.342813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.921 [2024-11-18 03:54:51.473467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.921 [2024-11-18 03:54:51.473468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.180 [2024-11-18 03:54:51.693009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.180 [2024-11-18 03:54:51.693074] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.557 03:54:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58359 /var/tmp/spdk-nbd.sock 00:05:56.557 03:54:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58359 ']' 00:05:56.557 03:54:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.557 03:54:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.558 03:54:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:56.558 03:54:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.558 03:54:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.817 03:54:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.817 03:54:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.817 03:54:53 event.app_repeat -- event/event.sh@39 -- # killprocess 58359 00:05:56.817 03:54:53 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58359 ']' 00:05:56.817 03:54:53 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58359 00:05:56.817 03:54:53 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58359 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58359' 00:05:56.818 killing process with pid 58359 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58359 00:05:56.818 03:54:53 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58359 00:05:58.200 spdk_app_start is called in Round 0. 00:05:58.200 Shutdown signal received, stop current app iteration 00:05:58.200 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:58.200 spdk_app_start is called in Round 1. 00:05:58.200 Shutdown signal received, stop current app iteration 00:05:58.200 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:58.200 spdk_app_start is called in Round 2. 
00:05:58.200 Shutdown signal received, stop current app iteration 00:05:58.200 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:58.200 spdk_app_start is called in Round 3. 00:05:58.200 Shutdown signal received, stop current app iteration 00:05:58.200 03:54:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:58.200 03:54:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:58.200 00:05:58.200 real 0m19.240s 00:05:58.200 user 0m40.769s 00:05:58.200 sys 0m2.888s 00:05:58.200 03:54:54 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.200 03:54:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.200 ************************************ 00:05:58.200 END TEST app_repeat 00:05:58.200 ************************************ 00:05:58.200 03:54:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:58.200 03:54:54 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.200 03:54:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.200 03:54:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.200 03:54:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.200 ************************************ 00:05:58.200 START TEST cpu_locks 00:05:58.200 ************************************ 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:58.200 * Looking for test storage... 
00:05:58.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.200 03:54:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.200 --rc genhtml_branch_coverage=1 00:05:58.200 --rc genhtml_function_coverage=1 00:05:58.200 --rc genhtml_legend=1 00:05:58.200 --rc geninfo_all_blocks=1 00:05:58.200 --rc geninfo_unexecuted_blocks=1 00:05:58.200 00:05:58.200 ' 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.200 --rc genhtml_branch_coverage=1 00:05:58.200 --rc genhtml_function_coverage=1 00:05:58.200 --rc genhtml_legend=1 00:05:58.200 --rc geninfo_all_blocks=1 00:05:58.200 --rc geninfo_unexecuted_blocks=1 
00:05:58.200 00:05:58.200 ' 00:05:58.200 03:54:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.201 --rc genhtml_branch_coverage=1 00:05:58.201 --rc genhtml_function_coverage=1 00:05:58.201 --rc genhtml_legend=1 00:05:58.201 --rc geninfo_all_blocks=1 00:05:58.201 --rc geninfo_unexecuted_blocks=1 00:05:58.201 00:05:58.201 ' 00:05:58.201 03:54:54 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.201 --rc genhtml_branch_coverage=1 00:05:58.201 --rc genhtml_function_coverage=1 00:05:58.201 --rc genhtml_legend=1 00:05:58.201 --rc geninfo_all_blocks=1 00:05:58.201 --rc geninfo_unexecuted_blocks=1 00:05:58.201 00:05:58.201 ' 00:05:58.201 03:54:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:58.201 03:54:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:58.201 03:54:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:58.201 03:54:54 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:58.201 03:54:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.201 03:54:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.201 03:54:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.201 ************************************ 00:05:58.201 START TEST default_locks 00:05:58.201 ************************************ 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58801 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.201 
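The `lt 1.15 2` / `cmp_versions` trace above compares the installed lcov version field by field after splitting on `.` and `-`. A simplified reconstruction of that helper (numeric components only; the real `scripts/common.sh` handles more cases, so treat this as a sketch rather than the actual implementation):

```shell
# Simplified reconstruction of scripts/common.sh's cmp_versions / lt,
# matching the trace above: split on '.' and '-', compare field by field.
cmp_versions() {
    local IFS=.- op=$2 ver1 ver2 v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
        ((d1 > d2)) && { [[ $op == '>' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"
```

This is why the log shows `decimal 1` / `decimal 2` steps: each dotted field is normalized to an integer before the arithmetic comparison.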
03:54:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58801 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58801 ']' 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.201 03:54:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.461 [2024-11-18 03:54:54.838874] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:58.461 [2024-11-18 03:54:54.838993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58801 ] 00:05:58.461 [2024-11-18 03:54:55.002413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.721 [2024-11-18 03:54:55.150519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.660 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.660 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:59.660 03:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58801 00:05:59.660 03:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58801 00:05:59.660 03:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58801 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58801 ']' 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58801 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58801 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.919 killing process with pid 58801 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58801' 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58801 00:05:59.919 03:54:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58801 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58801 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58801 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58801 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58801 ']' 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
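The `locks_exist` check above (`lslocks -p <pid> | grep -q spdk_cpu_lock`) verifies that the target process holds a per-core file lock. A minimal sketch of the underlying `flock` contention it relies on; the lock file path here is a stand-in, and the check is done by a second non-blocking taker rather than `lslocks`, which would need the real process:

```shell
# Sketch of the file-lock contention that 'locks_exist' detects. The lock
# file is a stand-in for SPDK's real spdk_cpu_lock files.
lockfile=$(mktemp)

# background holder: takes an exclusive flock on fd 9 and keeps it ~1s
( flock 9; sleep 1 ) 9>"$lockfile" &
sleep 0.2   # give the holder time to acquire the lock

# a non-blocking second taker must fail while the lock is held
if flock -n "$lockfile" true; then status=free; else status=held; fi
echo "lock is $status"
wait
```

The later `NOT waitforlisten` / `kill -0` sequence in the log is the inverse check: once the process is gone, its flocks are released and `lslocks` shows nothing.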
00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58801) - No such process 00:06:02.459 ERROR: process (pid: 58801) is no longer running 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.459 00:06:02.459 real 0m4.294s 00:06:02.459 user 0m4.023s 00:06:02.459 sys 0m0.800s 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.459 03:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.459 ************************************ 00:06:02.459 END TEST default_locks 00:06:02.459 ************************************ 00:06:02.459 03:54:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:02.459 03:54:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:02.459 03:54:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.459 03:54:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.459 ************************************ 00:06:02.459 START TEST default_locks_via_rpc 00:06:02.459 ************************************ 00:06:02.459 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:02.459 03:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58881 00:06:02.459 03:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.459 03:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58881 00:06:02.459 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58881 ']' 00:06:02.460 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.460 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.460 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.460 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.460 03:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.722 [2024-11-18 03:54:59.190301] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:02.722 [2024-11-18 03:54:59.190425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58881 ] 00:06:02.982 [2024-11-18 03:54:59.363935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.982 [2024-11-18 03:54:59.505124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.922 03:55:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58881 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58881 00:06:03.922 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58881 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58881 ']' 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58881 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58881 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.490 killing process with pid 58881 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58881' 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58881 00:06:04.490 03:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58881 00:06:07.032 00:06:07.032 real 0m4.435s 00:06:07.032 user 0m4.205s 00:06:07.032 sys 0m0.853s 00:06:07.032 03:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.032 03:55:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.032 ************************************ 00:06:07.032 END TEST default_locks_via_rpc 00:06:07.032 ************************************ 00:06:07.032 03:55:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.032 03:55:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.032 03:55:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.032 03:55:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.032 ************************************ 00:06:07.032 START TEST non_locking_app_on_locked_coremask 00:06:07.032 ************************************ 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58961 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58961 /var/tmp/spdk.sock 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.032 03:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.292 [2024-11-18 03:55:03.677434] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:07.292 [2024-11-18 03:55:03.677556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:06:07.292 [2024-11-18 03:55:03.851391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.552 [2024-11-18 03:55:03.983779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58977 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58977 /var/tmp/spdk2.sock 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:06:08.492 03:55:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.492 03:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.492 [2024-11-18 03:55:05.044882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:08.492 [2024-11-18 03:55:05.045404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:06:08.751 [2024-11-18 03:55:05.216602] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.751 [2024-11-18 03:55:05.216727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.011 [2024-11-18 03:55:05.498513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.560 killing process with pid 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58961' 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58961 00:06:11.560 03:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58961 00:06:16.840 03:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58977 00:06:16.841 03:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58977 ']' 00:06:16.841 03:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58977 00:06:16.841 03:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:16.841 03:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.841 03:55:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58977 00:06:16.841 03:55:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.841 03:55:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.841 killing process with pid 58977 00:06:16.841 03:55:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58977' 00:06:16.841 03:55:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58977 00:06:16.841 03:55:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58977 00:06:19.382 00:06:19.382 real 0m11.953s 00:06:19.382 user 0m11.858s 00:06:19.382 sys 0m1.491s 00:06:19.382 03:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:19.382 03:55:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 ************************************ 00:06:19.382 END TEST non_locking_app_on_locked_coremask 00:06:19.382 ************************************ 00:06:19.382 03:55:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.382 03:55:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.382 03:55:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.382 03:55:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 ************************************ 00:06:19.382 START TEST locking_app_on_unlocked_coremask 00:06:19.382 ************************************ 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59128 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59128 /var/tmp/spdk.sock 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59128 ']' 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.382 03:55:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 [2024-11-18 03:55:15.697751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:19.382 [2024-11-18 03:55:15.697914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59128 ] 00:06:19.382 [2024-11-18 03:55:15.872275] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.382 [2024-11-18 03:55:15.872331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.382 [2024-11-18 03:55:16.011358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59156 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59156 /var/tmp/spdk2.sock 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59156 ']' 
00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.763 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.764 03:55:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.764 [2024-11-18 03:55:17.110394] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:20.764 [2024-11-18 03:55:17.110547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59156 ] 00:06:20.764 [2024-11-18 03:55:17.277553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.024 [2024-11-18 03:55:17.554700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.564 03:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.564 03:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.564 03:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59156 00:06:23.564 03:55:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59156 00:06:23.564 03:55:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59128 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59128 ']' 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59128 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59128 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.824 killing process with pid 59128 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59128' 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59128 00:06:23.824 03:55:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59128 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59156 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59156 ']' 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59156 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59156 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.106 killing process with pid 59156 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59156' 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59156 00:06:29.106 03:55:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59156 00:06:31.645 00:06:31.645 real 0m12.572s 00:06:31.645 user 0m12.483s 00:06:31.645 sys 0m1.624s 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.645 ************************************ 00:06:31.645 END TEST locking_app_on_unlocked_coremask 00:06:31.645 ************************************ 00:06:31.645 03:55:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:31.645 03:55:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.645 03:55:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.645 03:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.645 ************************************ 00:06:31.645 START TEST 
locking_app_on_locked_coremask 00:06:31.645 ************************************ 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59307 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59307 /var/tmp/spdk.sock 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59307 ']' 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.645 03:55:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.025 [2024-11-18 03:55:28.332627] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:32.025 [2024-11-18 03:55:28.333239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:06:32.025 [2024-11-18 03:55:28.505325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.025 [2024-11-18 03:55:28.649612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59329 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59329 /var/tmp/spdk2.sock 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59329 /var/tmp/spdk2.sock 00:06:33.408 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59329 /var/tmp/spdk2.sock 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59329 ']' 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.409 03:55:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.409 [2024-11-18 03:55:29.737132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:33.409 [2024-11-18 03:55:29.737255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:06:33.409 [2024-11-18 03:55:29.907774] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59307 has claimed it. 00:06:33.409 [2024-11-18 03:55:29.907863] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:33.977 ERROR: process (pid: 59329) is no longer running 00:06:33.977 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59329) - No such process 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59307 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59307 00:06:33.977 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59307 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59307 ']' 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59307 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59307 00:06:34.236 
03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59307' 00:06:34.236 killing process with pid 59307 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59307 00:06:34.236 03:55:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59307 00:06:37.526 00:06:37.526 real 0m5.184s 00:06:37.526 user 0m5.138s 00:06:37.526 sys 0m0.984s 00:06:37.526 03:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.526 03:55:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.526 ************************************ 00:06:37.526 END TEST locking_app_on_locked_coremask 00:06:37.526 ************************************ 00:06:37.526 03:55:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.526 03:55:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.526 03:55:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.526 03:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.526 ************************************ 00:06:37.526 START TEST locking_overlapped_coremask 00:06:37.526 ************************************ 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59403 00:06:37.526 03:55:33 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59403 /var/tmp/spdk.sock 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59403 ']' 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.526 03:55:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.526 [2024-11-18 03:55:33.579016] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:37.526 [2024-11-18 03:55:33.579515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59403 ] 00:06:37.526 [2024-11-18 03:55:33.753761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.526 [2024-11-18 03:55:33.903422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.526 [2024-11-18 03:55:33.903583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.526 [2024-11-18 03:55:33.903623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59422 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59422 /var/tmp/spdk2.sock 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59422 /var/tmp/spdk2.sock 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59422 /var/tmp/spdk2.sock 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59422 ']' 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.466 03:55:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.466 [2024-11-18 03:55:35.064275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:38.466 [2024-11-18 03:55:35.064713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59422 ] 00:06:38.725 [2024-11-18 03:55:35.234846] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59403 has claimed it. 00:06:38.725 [2024-11-18 03:55:35.234909] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:39.294 ERROR: process (pid: 59422) is no longer running 00:06:39.294 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59422) - No such process 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59403 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59403 ']' 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59403 00:06:39.294 03:55:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59403 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.294 killing process with pid 59403 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59403' 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59403 00:06:39.294 03:55:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59403 00:06:41.831 00:06:41.831 real 0m4.970s 00:06:41.831 user 0m13.294s 00:06:41.831 sys 0m0.770s 00:06:41.831 03:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.831 03:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.831 ************************************ 00:06:41.831 END TEST locking_overlapped_coremask 00:06:41.831 ************************************ 00:06:42.091 03:55:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.091 03:55:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.091 03:55:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.091 03:55:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.091 ************************************ 00:06:42.091 START TEST 
locking_overlapped_coremask_via_rpc 00:06:42.091 ************************************ 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59493 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59493 /var/tmp/spdk.sock 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59493 ']' 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.091 03:55:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.091 [2024-11-18 03:55:38.614149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:42.091 [2024-11-18 03:55:38.614258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:06:42.351 [2024-11-18 03:55:38.771128] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.351 [2024-11-18 03:55:38.771198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.351 [2024-11-18 03:55:38.919197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.351 [2024-11-18 03:55:38.919357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.351 [2024-11-18 03:55:38.919401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59514 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59514 /var/tmp/spdk2.sock 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59514 ']' 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.749 03:55:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.749 03:55:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.749 [2024-11-18 03:55:40.058370] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:43.749 [2024-11-18 03:55:40.058488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59514 ] 00:06:43.749 [2024-11-18 03:55:40.225812] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.749 [2024-11-18 03:55:40.225875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.008 [2024-11-18 03:55:40.466265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.008 [2024-11-18 03:55:40.466402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.008 [2024-11-18 03:55:40.466438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.542 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.543 03:55:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.543 [2024-11-18 03:55:42.611062] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59493 has claimed it. 00:06:46.543 request: 00:06:46.543 { 00:06:46.543 "method": "framework_enable_cpumask_locks", 00:06:46.543 "req_id": 1 00:06:46.543 } 00:06:46.543 Got JSON-RPC error response 00:06:46.543 response: 00:06:46.543 { 00:06:46.543 "code": -32603, 00:06:46.543 "message": "Failed to claim CPU core: 2" 00:06:46.543 } 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59493 /var/tmp/spdk.sock 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59493 ']' 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59514 /var/tmp/spdk2.sock 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59514 ']' 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.543 03:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.543 00:06:46.543 real 0m4.519s 00:06:46.543 user 0m1.229s 00:06:46.543 sys 0m0.210s 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.543 03:55:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.543 ************************************ 00:06:46.543 END TEST locking_overlapped_coremask_via_rpc 00:06:46.543 ************************************ 00:06:46.543 03:55:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.543 03:55:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59493 ]] 00:06:46.543 03:55:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59493 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59493 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59493 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59493' 00:06:46.543 killing process with pid 59493 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59493 00:06:46.543 03:55:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59493 00:06:49.084 03:55:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59514 ]] 00:06:49.084 03:55:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59514 00:06:49.084 03:55:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59514 ']' 00:06:49.084 03:55:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59514 00:06:49.084 03:55:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.084 03:55:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.084 03:55:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59514 00:06:49.343 03:55:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.343 03:55:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.343 03:55:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59514' 00:06:49.343 killing 
process with pid 59514 00:06:49.343 03:55:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59514 00:06:49.343 03:55:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59514 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59493 ]] 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59493 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59493 00:06:51.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59493) - No such process 00:06:51.884 Process with pid 59493 is not found 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59493 is not found' 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59514 ]] 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59514 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59514 ']' 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59514 00:06:51.884 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59514) - No such process 00:06:51.884 Process with pid 59514 is not found 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59514 is not found' 00:06:51.884 03:55:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.884 00:06:51.884 real 0m53.568s 00:06:51.884 user 1m28.926s 00:06:51.884 sys 0m8.083s 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.884 03:55:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 
************************************ 00:06:51.884 END TEST cpu_locks 00:06:51.884 ************************************ 00:06:51.884 00:06:51.884 real 1m25.152s 00:06:51.884 user 2m31.874s 00:06:51.884 sys 0m12.265s 00:06:51.884 03:55:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.884 03:55:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 ************************************ 00:06:51.884 END TEST event 00:06:51.884 ************************************ 00:06:51.884 03:55:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.884 03:55:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.884 03:55:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.884 03:55:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 ************************************ 00:06:51.884 START TEST thread 00:06:51.884 ************************************ 00:06:51.884 03:55:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.884 * Looking for test storage... 
00:06:51.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:51.884 03:55:48 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.885 03:55:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.885 03:55:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.885 03:55:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.885 03:55:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.885 03:55:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.885 03:55:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.885 03:55:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.885 03:55:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.885 03:55:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.885 03:55:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.885 03:55:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.885 03:55:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.885 03:55:48 thread -- scripts/common.sh@345 -- # : 1 00:06:51.885 03:55:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.885 03:55:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:51.885 03:55:48 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.885 03:55:48 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.885 03:55:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.885 03:55:48 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.885 03:55:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.885 03:55:48 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.885 03:55:48 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.885 03:55:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.885 03:55:48 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.885 03:55:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.885 03:55:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.885 03:55:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.885 03:55:48 thread -- scripts/common.sh@368 -- # return 0 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.885 --rc genhtml_branch_coverage=1 00:06:51.885 --rc genhtml_function_coverage=1 00:06:51.885 --rc genhtml_legend=1 00:06:51.885 --rc geninfo_all_blocks=1 00:06:51.885 --rc geninfo_unexecuted_blocks=1 00:06:51.885 00:06:51.885 ' 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.885 --rc genhtml_branch_coverage=1 00:06:51.885 --rc genhtml_function_coverage=1 00:06:51.885 --rc genhtml_legend=1 00:06:51.885 --rc geninfo_all_blocks=1 00:06:51.885 --rc geninfo_unexecuted_blocks=1 00:06:51.885 00:06:51.885 ' 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.885 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.885 --rc genhtml_branch_coverage=1 00:06:51.885 --rc genhtml_function_coverage=1 00:06:51.885 --rc genhtml_legend=1 00:06:51.885 --rc geninfo_all_blocks=1 00:06:51.885 --rc geninfo_unexecuted_blocks=1 00:06:51.885 00:06:51.885 ' 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.885 --rc genhtml_branch_coverage=1 00:06:51.885 --rc genhtml_function_coverage=1 00:06:51.885 --rc genhtml_legend=1 00:06:51.885 --rc geninfo_all_blocks=1 00:06:51.885 --rc geninfo_unexecuted_blocks=1 00:06:51.885 00:06:51.885 ' 00:06:51.885 03:55:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.885 03:55:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 ************************************ 00:06:51.885 START TEST thread_poller_perf 00:06:51.885 ************************************ 00:06:51.885 03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.885 [2024-11-18 03:55:48.470985] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:51.885 [2024-11-18 03:55:48.471109] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59709 ] 00:06:52.144 [2024-11-18 03:55:48.641847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.144 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:52.144 [2024-11-18 03:55:48.774089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.526 [2024-11-18T03:55:50.167Z] ====================================== 00:06:53.526 [2024-11-18T03:55:50.167Z] busy:2299433296 (cyc) 00:06:53.526 [2024-11-18T03:55:50.167Z] total_run_count: 415000 00:06:53.526 [2024-11-18T03:55:50.167Z] tsc_hz: 2290000000 (cyc) 00:06:53.526 [2024-11-18T03:55:50.167Z] ====================================== 00:06:53.526 [2024-11-18T03:55:50.167Z] poller_cost: 5540 (cyc), 2419 (nsec) 00:06:53.526 00:06:53.526 real 0m1.602s 00:06:53.526 user 0m1.385s 00:06:53.526 sys 0m0.110s 00:06:53.526 03:55:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.526 03:55:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.526 ************************************ 00:06:53.526 END TEST thread_poller_perf 00:06:53.526 ************************************ 00:06:53.526 03:55:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.526 03:55:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.526 03:55:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.526 03:55:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.526 ************************************ 00:06:53.526 START TEST thread_poller_perf 00:06:53.526 
************************************ 00:06:53.526 03:55:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.526 [2024-11-18 03:55:50.131861] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:53.526 [2024-11-18 03:55:50.131963] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:06:53.786 [2024-11-18 03:55:50.300972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.045 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:54.045 [2024-11-18 03:55:50.433458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.427 [2024-11-18T03:55:52.068Z] ====================================== 00:06:55.427 [2024-11-18T03:55:52.068Z] busy:2293561266 (cyc) 00:06:55.427 [2024-11-18T03:55:52.068Z] total_run_count: 5222000 00:06:55.427 [2024-11-18T03:55:52.068Z] tsc_hz: 2290000000 (cyc) 00:06:55.427 [2024-11-18T03:55:52.068Z] ====================================== 00:06:55.427 [2024-11-18T03:55:52.068Z] poller_cost: 439 (cyc), 191 (nsec) 00:06:55.427 00:06:55.427 real 0m1.598s 00:06:55.427 user 0m1.387s 00:06:55.427 sys 0m0.103s 00:06:55.427 03:55:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.427 03:55:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.427 ************************************ 00:06:55.427 END TEST thread_poller_perf 00:06:55.427 ************************************ 00:06:55.427 03:55:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.427 00:06:55.427 real 0m3.547s 00:06:55.427 user 0m2.943s 00:06:55.427 sys 0m0.413s 00:06:55.427 03:55:51 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.427 03:55:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.427 ************************************ 00:06:55.427 END TEST thread 00:06:55.427 ************************************ 00:06:55.427 03:55:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.427 03:55:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.427 03:55:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.427 03:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.427 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:06:55.427 ************************************ 00:06:55.427 START TEST app_cmdline 00:06:55.427 ************************************ 00:06:55.427 03:55:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.427 * Looking for test storage... 00:06:55.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.427 03:55:51 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.427 03:55:51 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.427 03:55:51 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.427 03:55:51 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.427 03:55:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.427 --rc genhtml_branch_coverage=1 00:06:55.427 --rc genhtml_function_coverage=1 00:06:55.427 --rc 
genhtml_legend=1 00:06:55.427 --rc geninfo_all_blocks=1 00:06:55.427 --rc geninfo_unexecuted_blocks=1 00:06:55.427 00:06:55.427 ' 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.427 --rc genhtml_branch_coverage=1 00:06:55.427 --rc genhtml_function_coverage=1 00:06:55.427 --rc genhtml_legend=1 00:06:55.427 --rc geninfo_all_blocks=1 00:06:55.427 --rc geninfo_unexecuted_blocks=1 00:06:55.427 00:06:55.427 ' 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.427 --rc genhtml_branch_coverage=1 00:06:55.427 --rc genhtml_function_coverage=1 00:06:55.427 --rc genhtml_legend=1 00:06:55.427 --rc geninfo_all_blocks=1 00:06:55.427 --rc geninfo_unexecuted_blocks=1 00:06:55.427 00:06:55.427 ' 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.427 --rc genhtml_branch_coverage=1 00:06:55.427 --rc genhtml_function_coverage=1 00:06:55.427 --rc genhtml_legend=1 00:06:55.427 --rc geninfo_all_blocks=1 00:06:55.427 --rc geninfo_unexecuted_blocks=1 00:06:55.427 00:06:55.427 ' 00:06:55.427 03:55:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.427 03:55:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59839 00:06:55.427 03:55:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.427 03:55:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59839 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59839 ']' 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:55.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.427 03:55:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.687 [2024-11-18 03:55:52.113751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:55.687 [2024-11-18 03:55:52.113888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59839 ] 00:06:55.687 [2024-11-18 03:55:52.290799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.946 [2024-11-18 03:55:52.433525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.884 03:55:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.884 03:55:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:56.885 03:55:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:57.143 { 00:06:57.143 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:57.143 "fields": { 00:06:57.143 "major": 25, 00:06:57.143 "minor": 1, 00:06:57.143 "patch": 0, 00:06:57.143 "suffix": "-pre", 00:06:57.143 "commit": "83e8405e4" 00:06:57.143 } 00:06:57.143 } 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.143 03:55:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.143 03:55:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.143 03:55:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.144 03:55:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.144 03:55:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.144 03:55:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.144 03:55:53 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.403 request: 00:06:57.403 { 00:06:57.403 "method": "env_dpdk_get_mem_stats", 00:06:57.403 "req_id": 1 00:06:57.403 } 00:06:57.403 Got JSON-RPC error response 00:06:57.403 response: 00:06:57.403 { 00:06:57.403 "code": -32601, 00:06:57.403 "message": "Method not found" 00:06:57.403 } 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.403 03:55:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59839 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59839 ']' 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59839 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59839 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.403 killing process with pid 59839 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59839' 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 59839 00:06:57.403 03:55:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 59839 00:06:59.957 00:06:59.957 real 0m4.670s 00:06:59.957 user 0m4.708s 00:06:59.957 sys 0m0.764s 00:06:59.957 03:55:56 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.957 03:55:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.957 ************************************ 00:06:59.957 END TEST app_cmdline 00:06:59.957 ************************************ 00:06:59.957 03:55:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.957 03:55:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.957 03:55:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.957 03:55:56 -- common/autotest_common.sh@10 -- # set +x 00:06:59.957 ************************************ 00:06:59.957 START TEST version 00:06:59.957 ************************************ 00:06:59.957 03:55:56 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.216 * Looking for test storage... 00:07:00.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.216 03:55:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.216 03:55:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.216 03:55:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.216 03:55:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.216 03:55:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.216 03:55:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.216 03:55:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.216 03:55:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.216 03:55:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.216 03:55:56 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:00.216 03:55:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.216 03:55:56 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.216 03:55:56 version -- scripts/common.sh@345 -- # : 1 00:07:00.216 03:55:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.216 03:55:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.216 03:55:56 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.216 03:55:56 version -- scripts/common.sh@353 -- # local d=1 00:07:00.216 03:55:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.216 03:55:56 version -- scripts/common.sh@355 -- # echo 1 00:07:00.216 03:55:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.216 03:55:56 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.216 03:55:56 version -- scripts/common.sh@353 -- # local d=2 00:07:00.216 03:55:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.216 03:55:56 version -- scripts/common.sh@355 -- # echo 2 00:07:00.216 03:55:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.216 03:55:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.216 03:55:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.216 03:55:56 version -- scripts/common.sh@368 -- # return 0 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.216 --rc genhtml_branch_coverage=1 00:07:00.216 --rc genhtml_function_coverage=1 00:07:00.216 --rc genhtml_legend=1 00:07:00.216 --rc geninfo_all_blocks=1 00:07:00.216 --rc geninfo_unexecuted_blocks=1 00:07:00.216 00:07:00.216 ' 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:07:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.216 --rc genhtml_branch_coverage=1 00:07:00.216 --rc genhtml_function_coverage=1 00:07:00.216 --rc genhtml_legend=1 00:07:00.216 --rc geninfo_all_blocks=1 00:07:00.216 --rc geninfo_unexecuted_blocks=1 00:07:00.216 00:07:00.216 ' 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.216 --rc genhtml_branch_coverage=1 00:07:00.216 --rc genhtml_function_coverage=1 00:07:00.216 --rc genhtml_legend=1 00:07:00.216 --rc geninfo_all_blocks=1 00:07:00.216 --rc geninfo_unexecuted_blocks=1 00:07:00.216 00:07:00.216 ' 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.216 --rc genhtml_branch_coverage=1 00:07:00.216 --rc genhtml_function_coverage=1 00:07:00.216 --rc genhtml_legend=1 00:07:00.216 --rc geninfo_all_blocks=1 00:07:00.216 --rc geninfo_unexecuted_blocks=1 00:07:00.216 00:07:00.216 ' 00:07:00.216 03:55:56 version -- app/version.sh@17 -- # get_header_version major 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.216 03:55:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # cut -f2 00:07:00.216 03:55:56 version -- app/version.sh@17 -- # major=25 00:07:00.216 03:55:56 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.216 03:55:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # cut -f2 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.216 03:55:56 version -- app/version.sh@18 -- # minor=1 00:07:00.216 03:55:56 
version -- app/version.sh@19 -- # get_header_version patch 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # cut -f2 00:07:00.216 03:55:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.216 03:55:56 version -- app/version.sh@19 -- # patch=0 00:07:00.216 03:55:56 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.216 03:55:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # cut -f2 00:07:00.216 03:55:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.216 03:55:56 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.216 03:55:56 version -- app/version.sh@22 -- # version=25.1 00:07:00.216 03:55:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.216 03:55:56 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.216 03:55:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.216 03:55:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.216 03:55:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:00.216 03:55:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:00.216 00:07:00.216 real 0m0.307s 00:07:00.216 user 0m0.193s 00:07:00.216 sys 0m0.169s 00:07:00.216 03:55:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.216 03:55:56 version -- common/autotest_common.sh@10 -- # set +x 00:07:00.216 ************************************ 00:07:00.216 END TEST version 00:07:00.216 ************************************ 00:07:00.476 
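The version.sh trace above greps the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX macros out of include/spdk/version.h and assembles them into the `25.1rc0` string that is then compared against the Python package version. A minimal sketch of that assembly, using a stand-in temporary header (the macro values are copied from the log output; `awk '{print $3}'` stands in for the script's `cut -f2 | tr -d '"'` pipeline, and the `-pre` → `rc0` mapping is inferred from the `version=25.1rc0` line in the trace):

```shell
#!/bin/sh
# Stand-in for include/spdk/version.h; values taken from the trace above.
hdr=$(mktemp)
printf '%s\n' \
    '#define SPDK_VERSION_MAJOR 25' \
    '#define SPDK_VERSION_MINOR 1' \
    '#define SPDK_VERSION_PATCH 0' \
    '#define SPDK_VERSION_SUFFIX "-pre"' > "$hdr"

# Mirrors get_header_version in test/app/version.sh: pick the macro line,
# take its value, strip surrounding quotes.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | awk '{print $3}' | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

# patch is only appended when nonzero, matching the (( patch != 0 )) branch.
version="${major}.${minor}"
if [ "$patch" != 0 ]; then
    version="${version}.${patch}"
fi
# The log shows the "-pre" suffix rendered as an rc0 pre-release tag.
if [ "$suffix" = "-pre" ]; then
    version="${version}rc0"
fi

echo "$version"
rm -f "$hdr"
```

Run against the values in the trace this prints `25.1rc0`, which is exactly the string the test then matches against `python3 -c 'import spdk; print(spdk.__version__)'`.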
03:55:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:00.476 03:55:56 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:00.476 03:55:56 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:00.476 03:55:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.476 03:55:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.476 03:55:56 -- common/autotest_common.sh@10 -- # set +x 00:07:00.476 ************************************ 00:07:00.476 START TEST bdev_raid 00:07:00.476 ************************************ 00:07:00.476 03:55:56 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:00.476 * Looking for test storage... 00:07:00.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.476 03:55:57 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.476 03:55:57 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.476 03:55:57 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.476 03:55:57 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.476 03:55:57 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.736 03:55:57 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:00.736 03:55:57 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.736 03:55:57 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.736 --rc genhtml_branch_coverage=1 00:07:00.736 --rc genhtml_function_coverage=1 00:07:00.736 --rc genhtml_legend=1 00:07:00.736 --rc geninfo_all_blocks=1 00:07:00.736 --rc geninfo_unexecuted_blocks=1 00:07:00.736 00:07:00.736 ' 00:07:00.736 03:55:57 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.736 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.736 --rc genhtml_branch_coverage=1 00:07:00.736 --rc genhtml_function_coverage=1 00:07:00.736 --rc genhtml_legend=1 00:07:00.736 --rc geninfo_all_blocks=1 00:07:00.736 --rc geninfo_unexecuted_blocks=1 00:07:00.736 00:07:00.736 ' 00:07:00.736 03:55:57 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.736 --rc genhtml_branch_coverage=1 00:07:00.736 --rc genhtml_function_coverage=1 00:07:00.736 --rc genhtml_legend=1 00:07:00.736 --rc geninfo_all_blocks=1 00:07:00.736 --rc geninfo_unexecuted_blocks=1 00:07:00.736 00:07:00.736 ' 00:07:00.736 03:55:57 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.736 --rc genhtml_branch_coverage=1 00:07:00.736 --rc genhtml_function_coverage=1 00:07:00.736 --rc genhtml_legend=1 00:07:00.736 --rc geninfo_all_blocks=1 00:07:00.736 --rc geninfo_unexecuted_blocks=1 00:07:00.736 00:07:00.736 ' 00:07:00.736 03:55:57 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.736 03:55:57 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.736 03:55:57 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:00.736 03:55:57 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:00.736 03:55:57 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:00.736 03:55:57 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:00.736 03:55:57 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:00.736 03:55:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.737 03:55:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.737 03:55:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.737 ************************************ 
00:07:00.737 START TEST raid1_resize_data_offset_test 00:07:00.737 ************************************ 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60031 00:07:00.737 Process raid pid: 60031 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60031' 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60031 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60031 ']' 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.737 03:55:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.737 [2024-11-18 03:55:57.248377] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:00.737 [2024-11-18 03:55:57.248503] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.996 [2024-11-18 03:55:57.430482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.996 [2024-11-18 03:55:57.571819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.255 [2024-11-18 03:55:57.804128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.255 [2024-11-18 03:55:57.804199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.514 malloc0 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.514 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.774 malloc1 00:07:01.774 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.774 03:55:58 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:01.774 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.774 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.774 null0 00:07:01.774 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.775 [2024-11-18 03:55:58.252904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:01.775 [2024-11-18 03:55:58.254934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:01.775 [2024-11-18 03:55:58.254988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:01.775 [2024-11-18 03:55:58.255162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:01.775 [2024-11-18 03:55:58.255194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:01.775 [2024-11-18 03:55:58.255465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:01.775 [2024-11-18 03:55:58.255630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:01.775 [2024-11-18 03:55:58.255648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:01.775 [2024-11-18 03:55:58.255798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.775 [2024-11-18 03:55:58.308792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.775 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.344 malloc2
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.344 [2024-11-18 03:55:58.920856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:02.344 [2024-11-18 03:55:58.938945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.344 [2024-11-18 03:55:58.940985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.344 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60031
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60031 ']'
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60031
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:02.604 03:55:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60031
00:07:02.604 03:55:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:02.604 03:55:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:02.604 killing process with pid 60031
00:07:02.604 03:55:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60031'
00:07:02.604 03:55:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60031
00:07:02.604 03:55:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60031
00:07:02.604 [2024-11-18 03:55:59.021527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:02.604 [2024-11-18 03:55:59.021854] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:07:02.604 [2024-11-18 03:55:59.021914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:02.604 [2024-11-18 03:55:59.021932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:07:02.604 [2024-11-18 03:55:59.058594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:02.604 [2024-11-18 03:55:59.058961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:02.604 [2024-11-18 03:55:59.058994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:04.511 [2024-11-18 03:56:00.955539] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:05.892 03:56:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:07:05.892
00:07:05.892 real 0m4.979s
00:07:05.892 user 0m4.669s
00:07:05.892 sys 0m0.718s
00:07:05.892 03:56:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.892 03:56:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.892 ************************************
00:07:05.892 END TEST raid1_resize_data_offset_test
00:07:05.892 ************************************
00:07:05.892 03:56:02 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:07:05.892 03:56:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:05.892 03:56:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.892 03:56:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:05.892 ************************************
00:07:05.892 START TEST raid0_resize_superblock_test
00:07:05.892 ************************************
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60120
00:07:05.892 Process raid pid: 60120
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60120'
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60120
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60120 ']'
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:05.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.892 03:56:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:05.892 [2024-11-18 03:56:02.274804] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:07:05.892 [2024-11-18 03:56:02.274952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:05.892 [2024-11-18 03:56:02.452078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.151 [2024-11-18 03:56:02.588365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.410 [2024-11-18 03:56:02.828689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:06.410 [2024-11-18 03:56:02.828737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:06.669 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:06.669 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:06.669 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:06.669 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:06.669 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.239 malloc0
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.239 [2024-11-18 03:56:03.720191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:07.239 [2024-11-18 03:56:03.720280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:07.239 [2024-11-18 03:56:03.720309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:07.239 [2024-11-18 03:56:03.720322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:07.239 [2024-11-18 03:56:03.722783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:07.239 [2024-11-18 03:56:03.722834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.239 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.498 ffc0b6c6-7eb4-4db7-a08b-5f3d0109a1da
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.498 1296c1d0-112b-4f1d-ad46-6d95b45ee982
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.498 6cee2368-492a-47e2-aee0-a19b0c79ccae
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.498 [2024-11-18 03:56:03.922776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1296c1d0-112b-4f1d-ad46-6d95b45ee982 is claimed
00:07:07.498 [2024-11-18 03:56:03.922915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6cee2368-492a-47e2-aee0-a19b0c79ccae is claimed
00:07:07.498 [2024-11-18 03:56:03.923060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:07.498 [2024-11-18 03:56:03.923090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:07:07.498 [2024-11-18 03:56:03.923351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:07.498 [2024-11-18 03:56:03.923548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:07.498 [2024-11-18 03:56:03.923565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:07.498 [2024-11-18 03:56:03.923720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:07.498 03:56:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.498 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:07.498 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:07.498 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.499 [2024-11-18 03:56:04.030918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.499 [2024-11-18 03:56:04.074785] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:07.499 [2024-11-18 03:56:04.074833] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1296c1d0-112b-4f1d-ad46-6d95b45ee982' was resized: old size 131072, new size 204800
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.499 [2024-11-18 03:56:04.082744] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:07.499 [2024-11-18 03:56:04.082780] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6cee2368-492a-47e2-aee0-a19b0c79ccae' was resized: old size 131072, new size 204800
00:07:07.499 [2024-11-18 03:56:04.082812] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.499 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:07:07.759 [2024-11-18 03:56:04.174571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.759 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.760 [2024-11-18 03:56:04.218317] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:07:07.760 [2024-11-18 03:56:04.218403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:07:07.760 [2024-11-18 03:56:04.218417] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:07.760 [2024-11-18 03:56:04.218438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:07:07.760 [2024-11-18 03:56:04.218573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:07.760 [2024-11-18 03:56:04.218615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:07.760 [2024-11-18 03:56:04.218628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.760 [2024-11-18 03:56:04.226206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:07.760 [2024-11-18 03:56:04.226269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:07.760 [2024-11-18 03:56:04.226294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:07.760 [2024-11-18 03:56:04.226307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:07.760 [2024-11-18 03:56:04.228874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:07.760 [2024-11-18 03:56:04.228912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:07.760 [2024-11-18 03:56:04.230646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1296c1d0-112b-4f1d-ad46-6d95b45ee982
00:07:07.760 [2024-11-18 03:56:04.230725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1296c1d0-112b-4f1d-ad46-6d95b45ee982 is claimed
00:07:07.760 [2024-11-18 03:56:04.230859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6cee2368-492a-47e2-aee0-a19b0c79ccae
00:07:07.760 [2024-11-18 03:56:04.230891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6cee2368-492a-47e2-aee0-a19b0c79ccae is claimed
00:07:07.760 [2024-11-18 03:56:04.231074] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 6cee2368-492a-47e2-aee0-a19b0c79ccae (2) smaller than existing raid bdev Raid (3)
00:07:07.760 [2024-11-18 03:56:04.231117] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 1296c1d0-112b-4f1d-ad46-6d95b45ee982: File exists
00:07:07.760 [2024-11-18 03:56:04.231156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:07:07.760 [2024-11-18 03:56:04.231169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
00:07:07.760 [2024-11-18 03:56:04.231429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.760 [2024-11-18 03:56:04.231587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 [2024-11-18 03:56:04.231600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.760 [2024-11-18 03:56:04.231760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:07.760 [2024-11-18 03:56:04.246932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60120
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60120 ']'
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60120
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60120
killing process with pid 60120
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60120'
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60120
00:07:07.760 03:56:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60120
00:07:07.760 [2024-11-18 03:56:04.315471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:07.760 [2024-11-18 03:56:04.315579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:07.760 [2024-11-18 03:56:04.315641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:07.760 [2024-11-18 03:56:04.315652] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:09.667 [2024-11-18 03:56:05.849588] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:10.604 03:56:07 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:10.604
00:07:10.604 real 0m4.839s
00:07:10.604 user 0m4.845s
00:07:10.604 sys 0m0.716s
00:07:10.604 03:56:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.604 03:56:07 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:10.604 ************************************
00:07:10.604 END TEST raid0_resize_superblock_test
00:07:10.604 ************************************
00:07:10.604 03:56:07 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:07:10.604 03:56:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:10.604 03:56:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.604 03:56:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:10.604 ************************************
00:07:10.604 START TEST raid1_resize_superblock_test
00:07:10.604 ************************************
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60218
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60218'
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:10.604 Process raid pid: 60218
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60218
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60218 ']'
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:10.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:10.604 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:10.604 [2024-11-18 03:56:07.177021] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:07:10.604 [2024-11-18 03:56:07.177129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:10.863 [2024-11-18 03:56:07.334760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.863 [2024-11-18 03:56:07.470602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:11.122 [2024-11-18 03:56:07.711195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:11.122 [2024-11-18 03:56:07.711244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:11.380 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:11.380 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:11.380 03:56:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:11.380 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:11.380 03:56:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 malloc0
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 [2024-11-18 03:56:08.630894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:12.320 [2024-11-18 03:56:08.630974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:12.320 [2024-11-18 03:56:08.631001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:12.320 [2024-11-18 03:56:08.631014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:12.320 [2024-11-18 03:56:08.633362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:12.320 [2024-11-18 03:56:08.633403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 646f33a0-6430-4941-9735-66985b69e6c0
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 8902e500-01e7-435d-8735-a7ca610952de
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 7e2d5884-52fb-45a9-b7c6-c532f7064d04
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 [2024-11-18 03:56:08.839609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8902e500-01e7-435d-8735-a7ca610952de is claimed
00:07:12.320 [2024-11-18 03:56:08.839730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7e2d5884-52fb-45a9-b7c6-c532f7064d04 is claimed
00:07:12.320 [2024-11-18 03:56:08.839882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:12.320 [2024-11-18 03:56:08.839900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:07:12.320 [2024-11-18 03:56:08.840187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:12.320 [2024-11-18 03:56:08.840410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:12.320 [2024-11-18 03:56:08.840428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:12.320 [2024-11-18 03:56:08.840588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:12.320 [2024-11-18 03:56:08.947646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.320 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.581 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.581 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:12.581 03:56:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:12.581 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 [2024-11-18 03:56:08.999623] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.581 [2024-11-18 03:56:08.999664] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8902e500-01e7-435d-8735-a7ca610952de' was resized: old size 131072, new size 204800 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:12.581 03:56:09 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 [2024-11-18 03:56:09.015420] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:12.581 [2024-11-18 03:56:09.015450] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7e2d5884-52fb-45a9-b7c6-c532f7064d04' was resized: old size 131072, new size 204800 00:07:12.581 [2024-11-18 03:56:09.015476] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:12.581 [2024-11-18 03:56:09.111438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 [2024-11-18 03:56:09.159125] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:12.581 [2024-11-18 03:56:09.159220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:12.581 [2024-11-18 03:56:09.159255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:12.581 [2024-11-18 03:56:09.159436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.581 [2024-11-18 03:56:09.159690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.581 [2024-11-18 03:56:09.159767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.581 [2024-11-18 03:56:09.159783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 [2024-11-18 03:56:09.170962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:12.581 [2024-11-18 03:56:09.171019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.581 [2024-11-18 03:56:09.171056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:12.581 [2024-11-18 03:56:09.171073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.581 [2024-11-18 03:56:09.173616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.581 [2024-11-18 03:56:09.173654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:12.581 [2024-11-18 03:56:09.175454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
8902e500-01e7-435d-8735-a7ca610952de 00:07:12.581 [2024-11-18 03:56:09.175548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8902e500-01e7-435d-8735-a7ca610952de is claimed 00:07:12.581 [2024-11-18 03:56:09.175681] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7e2d5884-52fb-45a9-b7c6-c532f7064d04 00:07:12.581 [2024-11-18 03:56:09.175702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7e2d5884-52fb-45a9-b7c6-c532f7064d04 is claimed 00:07:12.581 [2024-11-18 03:56:09.175873] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7e2d5884-52fb-45a9-b7c6-c532f7064d04 (2) smaller than existing raid bdev Raid (3) 00:07:12.581 [2024-11-18 03:56:09.175904] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8902e500-01e7-435d-8735-a7ca610952de: File exists 00:07:12.581 [2024-11-18 03:56:09.175937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:12.581 [2024-11-18 03:56:09.175950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:12.581 pt0 00:07:12.581 [2024-11-18 03:56:09.176216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:12.581 [2024-11-18 03:56:09.176387] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:12.581 [2024-11-18 03:56:09.176410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:12.581 [2024-11-18 03:56:09.176555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.581 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.582 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.582 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.582 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.582 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:12.582 [2024-11-18 03:56:09.195711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:12.582 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60218 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60218 ']' 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60218 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60218 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60218' 00:07:12.841 killing process with pid 60218 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60218 00:07:12.841 [2024-11-18 03:56:09.274538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.841 [2024-11-18 03:56:09.274680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.841 [2024-11-18 03:56:09.274735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.841 [2024-11-18 03:56:09.274745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:12.841 03:56:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60218 00:07:14.222 [2024-11-18 03:56:10.791958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.601 03:56:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:15.601 00:07:15.601 real 0m4.868s 00:07:15.601 user 0m4.866s 00:07:15.601 sys 0m0.723s 00:07:15.601 03:56:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.601 03:56:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.601 ************************************ 00:07:15.601 END TEST raid1_resize_superblock_test 00:07:15.601 
************************************ 00:07:15.601 03:56:12 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:15.601 03:56:12 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:15.601 03:56:12 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:15.601 03:56:12 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:15.601 03:56:12 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:15.601 03:56:12 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:15.601 03:56:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.601 03:56:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.601 03:56:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.601 ************************************ 00:07:15.601 START TEST raid_function_test_raid0 00:07:15.601 ************************************ 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60321 00:07:15.601 Process raid pid: 60321 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60321' 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60321 00:07:15.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60321 ']' 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.601 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:15.601 [2024-11-18 03:56:12.142875] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:15.601 [2024-11-18 03:56:12.143074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.861 [2024-11-18 03:56:12.315521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.861 [2024-11-18 03:56:12.456257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.121 [2024-11-18 03:56:12.692082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.121 [2024-11-18 03:56:12.692210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.381 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.381 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:16.381 03:56:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:16.381 03:56:12 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.381 03:56:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:16.381 Base_1 00:07:16.381 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.381 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:16.381 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.381 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:16.653 Base_2 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:16.653 [2024-11-18 03:56:13.066377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:16.653 [2024-11-18 03:56:13.068584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:16.653 [2024-11-18 03:56:13.068653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:16.653 [2024-11-18 03:56:13.068666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:16.653 [2024-11-18 03:56:13.068942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.653 [2024-11-18 03:56:13.069090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:16.653 [2024-11-18 03:56:13.069099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:16.653 [2024-11-18 03:56:13.069249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:07:16.653 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:16.942 [2024-11-18 03:56:13.310115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:16.942 /dev/nbd0 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.942 1+0 records in 00:07:16.942 1+0 records out 00:07:16.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325616 s, 12.6 MB/s 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # size=4096 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:16.942 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.220 { 00:07:17.220 "nbd_device": "/dev/nbd0", 00:07:17.220 "bdev_name": "raid" 00:07:17.220 } 00:07:17.220 ]' 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.220 { 00:07:17.220 "nbd_device": "/dev/nbd0", 00:07:17.220 "bdev_name": "raid" 00:07:17.220 } 00:07:17.220 ]' 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:17.220 
03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:17.220 4096+0 records in 00:07:17.220 4096+0 records out 00:07:17.220 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0275943 s, 76.0 MB/s 00:07:17.220 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:17.479 4096+0 records in 00:07:17.479 4096+0 records out 00:07:17.479 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.231906 s, 9.0 MB/s 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:17.479 128+0 records in 00:07:17.479 128+0 records out 00:07:17.479 65536 bytes (66 kB, 64 KiB) copied, 0.00118131 s, 55.5 MB/s 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:17.479 03:56:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:17.479 2035+0 records in 00:07:17.480 2035+0 records out 00:07:17.480 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0141983 s, 73.4 MB/s 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:17.480 456+0 records in 00:07:17.480 456+0 records out 00:07:17.480 233472 bytes (233 kB, 228 KiB) copied, 0.00270548 s, 86.3 MB/s 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:17.480 03:56:14 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.480 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.740 [2024-11-18 03:56:14.261987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:17.740 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60321 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60321 ']' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60321 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60321 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.999 killing process with pid 60321 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60321' 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60321 00:07:17.999 [2024-11-18 03:56:14.567188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.999 03:56:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60321 00:07:17.999 [2024-11-18 03:56:14.567335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.999 [2024-11-18 03:56:14.567394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.999 [2024-11-18 03:56:14.567413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:18.259 [2024-11-18 03:56:14.789116] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.641 03:56:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:19.641 00:07:19.641 real 0m3.915s 00:07:19.641 user 0m4.367s 00:07:19.641 sys 0m1.064s 00:07:19.642 03:56:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.642 03:56:15 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:07:19.642 ************************************ 00:07:19.642 END TEST raid_function_test_raid0 00:07:19.642 ************************************ 00:07:19.642 03:56:16 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:19.642 03:56:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:19.642 03:56:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.642 03:56:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.642 ************************************ 00:07:19.642 START TEST raid_function_test_concat 00:07:19.642 ************************************ 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60450 00:07:19.642 Process raid pid: 60450 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60450' 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60450 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60450 ']' 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.642 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:19.642 [2024-11-18 03:56:16.121024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:19.642 [2024-11-18 03:56:16.121157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.642 [2024-11-18 03:56:16.275308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.902 [2024-11-18 03:56:16.415227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.161 [2024-11-18 03:56:16.662452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.161 [2024-11-18 03:56:16.662508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:20.421 Base_1 
00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.421 03:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:20.421 Base_2 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:20.421 [2024-11-18 03:56:17.050499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:20.421 [2024-11-18 03:56:17.052707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:20.421 [2024-11-18 03:56:17.052785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:20.421 [2024-11-18 03:56:17.052798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:20.421 [2024-11-18 03:56:17.053080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.421 [2024-11-18 03:56:17.053248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:20.421 [2024-11-18 03:56:17.053264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:20.421 [2024-11-18 03:56:17.053456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.421 03:56:17 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.421 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:07:20.680 [2024-11-18 03:56:17.290160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:20.680 /dev/nbd0 00:07:20.680 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.939 1+0 records in 00:07:20.939 1+0 records out 00:07:20.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413806 s, 9.9 MB/s 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:20.939 { 00:07:20.939 "nbd_device": "/dev/nbd0", 00:07:20.939 "bdev_name": "raid" 00:07:20.939 } 00:07:20.939 ]' 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:20.939 { 00:07:20.939 "nbd_device": "/dev/nbd0", 00:07:20.939 "bdev_name": "raid" 00:07:20.939 } 00:07:20.939 ]' 00:07:20.939 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.198 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:21.198 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:21.198 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.198 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:21.198 03:56:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:21.198 03:56:17 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:21.198 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:21.199 4096+0 records in 00:07:21.199 4096+0 records out 00:07:21.199 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0334449 s, 62.7 MB/s 00:07:21.199 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:21.459 4096+0 records in 00:07:21.459 4096+0 records out 00:07:21.459 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.235494 s, 8.9 MB/s 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:21.459 128+0 records in 00:07:21.459 128+0 records out 00:07:21.459 65536 bytes (66 kB, 64 KiB) copied, 0.00111379 s, 58.8 MB/s 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:21.459 2035+0 records in 00:07:21.459 2035+0 records out 00:07:21.459 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0128651 s, 81.0 MB/s 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:21.459 456+0 records in 00:07:21.459 456+0 records out 00:07:21.459 233472 bytes (233 kB, 228 KiB) copied, 0.00367606 s, 63.5 MB/s 00:07:21.459 03:56:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.459 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.719 [2024-11-18 03:56:18.221816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.719 03:56:18 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:21.719 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60450 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60450 ']' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60450 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60450 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.979 killing process with pid 60450 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60450' 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60450 00:07:21.979 [2024-11-18 03:56:18.541218] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.979 [2024-11-18 03:56:18.541349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.979 03:56:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60450 00:07:21.979 [2024-11-18 03:56:18.541410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.979 [2024-11-18 03:56:18.541424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:22.240 [2024-11-18 03:56:18.763582] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.620 03:56:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:23.620 00:07:23.620 real 0m3.893s 00:07:23.620 user 0m4.403s 00:07:23.620 sys 0m1.023s 00:07:23.620 03:56:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.620 03:56:19 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:07:23.620 ************************************ 00:07:23.620 END TEST raid_function_test_concat 00:07:23.620 ************************************ 00:07:23.620 03:56:19 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:23.620 03:56:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.620 03:56:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.620 03:56:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.620 ************************************ 00:07:23.620 START TEST raid0_resize_test 00:07:23.620 ************************************ 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60579 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.620 Process raid pid: 60579 00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60579' 
00:07:23.620 03:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60579 00:07:23.621 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60579 ']' 00:07:23.621 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.621 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.621 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.621 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.621 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.621 [2024-11-18 03:56:20.080732] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:23.621 [2024-11-18 03:56:20.080875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.621 [2024-11-18 03:56:20.238192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.881 [2024-11-18 03:56:20.344567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.140 [2024-11-18 03:56:20.538409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.140 [2024-11-18 03:56:20.538449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 Base_1 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 Base_2 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 [2024-11-18 03:56:20.937396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:24.401 [2024-11-18 03:56:20.939386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:24.401 [2024-11-18 03:56:20.939454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.401 [2024-11-18 03:56:20.939467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.401 [2024-11-18 03:56:20.939727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:24.401 [2024-11-18 03:56:20.939922] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.401 [2024-11-18 03:56:20.939940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:24.401 [2024-11-18 03:56:20.940107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 [2024-11-18 03:56:20.949349] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.401 [2024-11-18 03:56:20.949381] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:24.401 true 
00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 [2024-11-18 03:56:20.965490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.401 03:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 [2024-11-18 03:56:21.013245] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:24.401 [2024-11-18 03:56:21.013272] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:24.401 [2024-11-18 03:56:21.013298] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:24.401 true 
00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.401 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.401 [2024-11-18 03:56:21.029391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60579 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60579 ']' 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60579 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60579 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.661 03:56:21 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.661 killing process with pid 60579 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60579' 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60579 00:07:24.661 [2024-11-18 03:56:21.102673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.661 [2024-11-18 03:56:21.102765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.661 [2024-11-18 03:56:21.102817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:24.661 [2024-11-18 03:56:21.102844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:24.661 03:56:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60579 00:07:24.661 [2024-11-18 03:56:21.119382] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.602 03:56:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:25.602 00:07:25.602 real 0m2.176s 00:07:25.602 user 0m2.313s 00:07:25.602 sys 0m0.329s 00:07:25.602 03:56:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.602 03:56:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.602 ************************************ 00:07:25.602 END TEST raid0_resize_test 00:07:25.602 ************************************ 00:07:25.602 03:56:22 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:25.602 03:56:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.602 03:56:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.602 03:56:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.602 
************************************ 00:07:25.602 START TEST raid1_resize_test 00:07:25.602 ************************************ 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:25.602 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60635 00:07:25.862 Process raid pid: 60635 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60635' 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60635 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60635 ']' 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.862 03:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.862 [2024-11-18 03:56:22.324724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:25.862 [2024-11-18 03:56:22.324870] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.862 [2024-11-18 03:56:22.482588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.122 [2024-11-18 03:56:22.589852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.382 [2024-11-18 03:56:22.778970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.382 [2024-11-18 03:56:22.779005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.642 Base_1 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:26.642 
03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.642 Base_2 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.642 [2024-11-18 03:56:23.173492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:26.642 [2024-11-18 03:56:23.175210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:26.642 [2024-11-18 03:56:23.175292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:26.642 [2024-11-18 03:56:23.175305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:26.642 [2024-11-18 03:56:23.175543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:26.642 [2024-11-18 03:56:23.175698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:26.642 [2024-11-18 03:56:23.175715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:26.642 [2024-11-18 03:56:23.175882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:26.642 03:56:23 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.642 [2024-11-18 03:56:23.185463] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:26.642 [2024-11-18 03:56:23.185498] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:26.642 true 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.642 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.643 [2024-11-18 03:56:23.197595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:26.643 [2024-11-18 03:56:23.245335] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:26.643 [2024-11-18 03:56:23.245362] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:26.643 [2024-11-18 03:56:23.245388] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:26.643 true 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:26.643 [2024-11-18 03:56:23.257495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.643 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60635 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60635 ']' 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60635 00:07:26.903 
03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60635 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.903 killing process with pid 60635 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60635' 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60635 00:07:26.903 [2024-11-18 03:56:23.347276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.903 [2024-11-18 03:56:23.347372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.903 03:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60635 00:07:26.903 [2024-11-18 03:56:23.347895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.903 [2024-11-18 03:56:23.347921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:26.903 [2024-11-18 03:56:23.364991] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.841 03:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:27.841 00:07:27.841 real 0m2.171s 00:07:27.841 user 0m2.307s 00:07:27.841 sys 0m0.327s 00:07:27.841 03:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.841 03:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.841 ************************************ 00:07:27.841 END TEST raid1_resize_test 
00:07:27.841 ************************************ 00:07:27.841 03:56:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:27.841 03:56:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:27.841 03:56:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:27.841 03:56:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.841 03:56:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.841 03:56:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.841 ************************************ 00:07:27.841 START TEST raid_state_function_test 00:07:27.841 ************************************ 00:07:27.841 03:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:27.841 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:27.841 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.841 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:27.841 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:28.100 03:56:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60692 00:07:28.100 Process raid pid: 60692 00:07:28.100 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60692' 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60692 00:07:28.101 03:56:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60692 ']' 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.101 03:56:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.101 [2024-11-18 03:56:24.569741] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:28.101 [2024-11-18 03:56:24.569882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.101 [2024-11-18 03:56:24.726497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.360 [2024-11-18 03:56:24.834489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.619 [2024-11-18 03:56:25.021415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.619 [2024-11-18 03:56:25.021454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 [2024-11-18 03:56:25.389245] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.879 [2024-11-18 03:56:25.389301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.879 [2024-11-18 03:56:25.389311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.879 [2024-11-18 03:56:25.389321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.879 
03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.879 "name": "Existed_Raid", 00:07:28.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.879 "strip_size_kb": 64, 00:07:28.879 "state": "configuring", 00:07:28.879 "raid_level": "raid0", 00:07:28.879 "superblock": false, 00:07:28.879 "num_base_bdevs": 2, 00:07:28.879 "num_base_bdevs_discovered": 0, 00:07:28.879 "num_base_bdevs_operational": 2, 00:07:28.879 "base_bdevs_list": [ 00:07:28.879 { 00:07:28.879 "name": "BaseBdev1", 00:07:28.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.879 "is_configured": false, 00:07:28.879 "data_offset": 0, 00:07:28.879 "data_size": 0 00:07:28.879 }, 00:07:28.879 { 00:07:28.879 "name": "BaseBdev2", 00:07:28.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.879 "is_configured": false, 00:07:28.879 "data_offset": 0, 00:07:28.879 "data_size": 0 00:07:28.879 } 00:07:28.879 ] 00:07:28.879 }' 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.879 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.448 03:56:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.448 [2024-11-18 03:56:25.836420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.448 [2024-11-18 03:56:25.836459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.448 [2024-11-18 03:56:25.844392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:29.448 [2024-11-18 03:56:25.844437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.448 [2024-11-18 03:56:25.844447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.448 [2024-11-18 03:56:25.844458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.448 [2024-11-18 03:56:25.889528] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.448 BaseBdev1 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.448 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.449 [ 00:07:29.449 { 00:07:29.449 "name": "BaseBdev1", 00:07:29.449 "aliases": [ 00:07:29.449 "a235ca68-8ce2-42c3-84c7-dfc11adca57f" 00:07:29.449 ], 00:07:29.449 "product_name": "Malloc disk", 00:07:29.449 "block_size": 512, 00:07:29.449 "num_blocks": 65536, 00:07:29.449 "uuid": 
"a235ca68-8ce2-42c3-84c7-dfc11adca57f", 00:07:29.449 "assigned_rate_limits": { 00:07:29.449 "rw_ios_per_sec": 0, 00:07:29.449 "rw_mbytes_per_sec": 0, 00:07:29.449 "r_mbytes_per_sec": 0, 00:07:29.449 "w_mbytes_per_sec": 0 00:07:29.449 }, 00:07:29.449 "claimed": true, 00:07:29.449 "claim_type": "exclusive_write", 00:07:29.449 "zoned": false, 00:07:29.449 "supported_io_types": { 00:07:29.449 "read": true, 00:07:29.449 "write": true, 00:07:29.449 "unmap": true, 00:07:29.449 "flush": true, 00:07:29.449 "reset": true, 00:07:29.449 "nvme_admin": false, 00:07:29.449 "nvme_io": false, 00:07:29.449 "nvme_io_md": false, 00:07:29.449 "write_zeroes": true, 00:07:29.449 "zcopy": true, 00:07:29.449 "get_zone_info": false, 00:07:29.449 "zone_management": false, 00:07:29.449 "zone_append": false, 00:07:29.449 "compare": false, 00:07:29.449 "compare_and_write": false, 00:07:29.449 "abort": true, 00:07:29.449 "seek_hole": false, 00:07:29.449 "seek_data": false, 00:07:29.449 "copy": true, 00:07:29.449 "nvme_iov_md": false 00:07:29.449 }, 00:07:29.449 "memory_domains": [ 00:07:29.449 { 00:07:29.449 "dma_device_id": "system", 00:07:29.449 "dma_device_type": 1 00:07:29.449 }, 00:07:29.449 { 00:07:29.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.449 "dma_device_type": 2 00:07:29.449 } 00:07:29.449 ], 00:07:29.449 "driver_specific": {} 00:07:29.449 } 00:07:29.449 ] 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.449 03:56:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.449 "name": "Existed_Raid", 00:07:29.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.449 "strip_size_kb": 64, 00:07:29.449 "state": "configuring", 00:07:29.449 "raid_level": "raid0", 00:07:29.449 "superblock": false, 00:07:29.449 "num_base_bdevs": 2, 00:07:29.449 "num_base_bdevs_discovered": 1, 00:07:29.449 "num_base_bdevs_operational": 2, 00:07:29.449 "base_bdevs_list": [ 00:07:29.449 { 00:07:29.449 "name": "BaseBdev1", 00:07:29.449 "uuid": "a235ca68-8ce2-42c3-84c7-dfc11adca57f", 00:07:29.449 "is_configured": true, 00:07:29.449 "data_offset": 0, 
00:07:29.449 "data_size": 65536 00:07:29.449 }, 00:07:29.449 { 00:07:29.449 "name": "BaseBdev2", 00:07:29.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.449 "is_configured": false, 00:07:29.449 "data_offset": 0, 00:07:29.449 "data_size": 0 00:07:29.449 } 00:07:29.449 ] 00:07:29.449 }' 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.449 03:56:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.708 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.708 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.708 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.709 [2024-11-18 03:56:26.308914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.709 [2024-11-18 03:56:26.308974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.709 [2024-11-18 03:56:26.320903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.709 [2024-11-18 03:56:26.322681] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.709 [2024-11-18 03:56:26.322723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.709 03:56:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.968 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.968 "name": "Existed_Raid", 00:07:29.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.968 "strip_size_kb": 64, 00:07:29.968 "state": "configuring", 00:07:29.968 "raid_level": "raid0", 00:07:29.968 "superblock": false, 00:07:29.968 "num_base_bdevs": 2, 00:07:29.968 "num_base_bdevs_discovered": 1, 00:07:29.968 "num_base_bdevs_operational": 2, 00:07:29.968 "base_bdevs_list": [ 00:07:29.968 { 00:07:29.968 "name": "BaseBdev1", 00:07:29.968 "uuid": "a235ca68-8ce2-42c3-84c7-dfc11adca57f", 00:07:29.968 "is_configured": true, 00:07:29.968 "data_offset": 0, 00:07:29.968 "data_size": 65536 00:07:29.968 }, 00:07:29.968 { 00:07:29.968 "name": "BaseBdev2", 00:07:29.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.968 "is_configured": false, 00:07:29.968 "data_offset": 0, 00:07:29.968 "data_size": 0 00:07:29.968 } 00:07:29.968 ] 00:07:29.968 }' 00:07:29.968 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.968 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.228 [2024-11-18 03:56:26.792728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.228 [2024-11-18 03:56:26.792775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:30.228 [2024-11-18 03:56:26.792801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:30.228 [2024-11-18 03:56:26.793074] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.228 [2024-11-18 03:56:26.793242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:30.228 [2024-11-18 03:56:26.793264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:30.228 [2024-11-18 03:56:26.793512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.228 BaseBdev2 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.228 03:56:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.228 [ 00:07:30.228 { 00:07:30.228 "name": "BaseBdev2", 00:07:30.228 "aliases": [ 00:07:30.228 "a4751140-5264-4bf3-82d8-2d4ae4127fb3" 00:07:30.228 ], 00:07:30.228 "product_name": "Malloc disk", 00:07:30.228 "block_size": 512, 00:07:30.228 "num_blocks": 65536, 00:07:30.228 "uuid": "a4751140-5264-4bf3-82d8-2d4ae4127fb3", 00:07:30.228 "assigned_rate_limits": { 00:07:30.228 "rw_ios_per_sec": 0, 00:07:30.228 "rw_mbytes_per_sec": 0, 00:07:30.228 "r_mbytes_per_sec": 0, 00:07:30.228 "w_mbytes_per_sec": 0 00:07:30.228 }, 00:07:30.228 "claimed": true, 00:07:30.228 "claim_type": "exclusive_write", 00:07:30.228 "zoned": false, 00:07:30.228 "supported_io_types": { 00:07:30.228 "read": true, 00:07:30.228 "write": true, 00:07:30.228 "unmap": true, 00:07:30.228 "flush": true, 00:07:30.228 "reset": true, 00:07:30.228 "nvme_admin": false, 00:07:30.228 "nvme_io": false, 00:07:30.228 "nvme_io_md": false, 00:07:30.228 "write_zeroes": true, 00:07:30.228 "zcopy": true, 00:07:30.228 "get_zone_info": false, 00:07:30.228 "zone_management": false, 00:07:30.228 "zone_append": false, 00:07:30.228 "compare": false, 00:07:30.228 "compare_and_write": false, 00:07:30.228 "abort": true, 00:07:30.228 "seek_hole": false, 00:07:30.228 "seek_data": false, 00:07:30.228 "copy": true, 00:07:30.228 "nvme_iov_md": false 00:07:30.228 }, 00:07:30.228 "memory_domains": [ 00:07:30.228 { 00:07:30.228 "dma_device_id": "system", 00:07:30.228 "dma_device_type": 1 00:07:30.228 }, 00:07:30.228 { 00:07:30.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.228 "dma_device_type": 2 00:07:30.228 } 00:07:30.228 ], 00:07:30.228 "driver_specific": {} 00:07:30.228 } 00:07:30.228 ] 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:30.228 03:56:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.228 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.488 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:30.488 "name": "Existed_Raid", 00:07:30.488 "uuid": "1d091c3c-2093-499d-9edd-f9a92aabc513", 00:07:30.488 "strip_size_kb": 64, 00:07:30.488 "state": "online", 00:07:30.488 "raid_level": "raid0", 00:07:30.488 "superblock": false, 00:07:30.488 "num_base_bdevs": 2, 00:07:30.488 "num_base_bdevs_discovered": 2, 00:07:30.488 "num_base_bdevs_operational": 2, 00:07:30.488 "base_bdevs_list": [ 00:07:30.488 { 00:07:30.488 "name": "BaseBdev1", 00:07:30.488 "uuid": "a235ca68-8ce2-42c3-84c7-dfc11adca57f", 00:07:30.488 "is_configured": true, 00:07:30.488 "data_offset": 0, 00:07:30.488 "data_size": 65536 00:07:30.488 }, 00:07:30.488 { 00:07:30.488 "name": "BaseBdev2", 00:07:30.488 "uuid": "a4751140-5264-4bf3-82d8-2d4ae4127fb3", 00:07:30.488 "is_configured": true, 00:07:30.488 "data_offset": 0, 00:07:30.488 "data_size": 65536 00:07:30.488 } 00:07:30.488 ] 00:07:30.488 }' 00:07:30.488 03:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.488 03:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.749 [2024-11-18 03:56:27.272228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:30.749 "name": "Existed_Raid", 00:07:30.749 "aliases": [ 00:07:30.749 "1d091c3c-2093-499d-9edd-f9a92aabc513" 00:07:30.749 ], 00:07:30.749 "product_name": "Raid Volume", 00:07:30.749 "block_size": 512, 00:07:30.749 "num_blocks": 131072, 00:07:30.749 "uuid": "1d091c3c-2093-499d-9edd-f9a92aabc513", 00:07:30.749 "assigned_rate_limits": { 00:07:30.749 "rw_ios_per_sec": 0, 00:07:30.749 "rw_mbytes_per_sec": 0, 00:07:30.749 "r_mbytes_per_sec": 0, 00:07:30.749 "w_mbytes_per_sec": 0 00:07:30.749 }, 00:07:30.749 "claimed": false, 00:07:30.749 "zoned": false, 00:07:30.749 "supported_io_types": { 00:07:30.749 "read": true, 00:07:30.749 "write": true, 00:07:30.749 "unmap": true, 00:07:30.749 "flush": true, 00:07:30.749 "reset": true, 00:07:30.749 "nvme_admin": false, 00:07:30.749 "nvme_io": false, 00:07:30.749 "nvme_io_md": false, 00:07:30.749 "write_zeroes": true, 00:07:30.749 "zcopy": false, 00:07:30.749 "get_zone_info": false, 00:07:30.749 "zone_management": false, 00:07:30.749 "zone_append": false, 00:07:30.749 "compare": false, 00:07:30.749 "compare_and_write": false, 00:07:30.749 "abort": false, 00:07:30.749 "seek_hole": false, 00:07:30.749 "seek_data": false, 00:07:30.749 "copy": false, 00:07:30.749 "nvme_iov_md": false 00:07:30.749 }, 00:07:30.749 "memory_domains": [ 00:07:30.749 { 00:07:30.749 "dma_device_id": "system", 00:07:30.749 "dma_device_type": 1 00:07:30.749 }, 00:07:30.749 { 00:07:30.749 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:30.749 "dma_device_type": 2 00:07:30.749 }, 00:07:30.749 { 00:07:30.749 "dma_device_id": "system", 00:07:30.749 "dma_device_type": 1 00:07:30.749 }, 00:07:30.749 { 00:07:30.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.749 "dma_device_type": 2 00:07:30.749 } 00:07:30.749 ], 00:07:30.749 "driver_specific": { 00:07:30.749 "raid": { 00:07:30.749 "uuid": "1d091c3c-2093-499d-9edd-f9a92aabc513", 00:07:30.749 "strip_size_kb": 64, 00:07:30.749 "state": "online", 00:07:30.749 "raid_level": "raid0", 00:07:30.749 "superblock": false, 00:07:30.749 "num_base_bdevs": 2, 00:07:30.749 "num_base_bdevs_discovered": 2, 00:07:30.749 "num_base_bdevs_operational": 2, 00:07:30.749 "base_bdevs_list": [ 00:07:30.749 { 00:07:30.749 "name": "BaseBdev1", 00:07:30.749 "uuid": "a235ca68-8ce2-42c3-84c7-dfc11adca57f", 00:07:30.749 "is_configured": true, 00:07:30.749 "data_offset": 0, 00:07:30.749 "data_size": 65536 00:07:30.749 }, 00:07:30.749 { 00:07:30.749 "name": "BaseBdev2", 00:07:30.749 "uuid": "a4751140-5264-4bf3-82d8-2d4ae4127fb3", 00:07:30.749 "is_configured": true, 00:07:30.749 "data_offset": 0, 00:07:30.749 "data_size": 65536 00:07:30.749 } 00:07:30.749 ] 00:07:30.749 } 00:07:30.749 } 00:07:30.749 }' 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:30.749 BaseBdev2' 00:07:30.749 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:31.014 [2024-11-18 03:56:27.515565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:31.014 [2024-11-18 03:56:27.515600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.014 [2024-11-18 03:56:27.515646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.014 03:56:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.279 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.279 "name": "Existed_Raid", 00:07:31.279 "uuid": "1d091c3c-2093-499d-9edd-f9a92aabc513", 00:07:31.279 "strip_size_kb": 64, 00:07:31.279 "state": "offline", 00:07:31.279 "raid_level": "raid0", 00:07:31.279 "superblock": false, 00:07:31.279 "num_base_bdevs": 2, 00:07:31.279 "num_base_bdevs_discovered": 1, 00:07:31.279 "num_base_bdevs_operational": 1, 00:07:31.279 "base_bdevs_list": [ 00:07:31.279 { 00:07:31.279 "name": null, 00:07:31.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.279 "is_configured": false, 00:07:31.279 "data_offset": 0, 00:07:31.279 "data_size": 65536 00:07:31.279 }, 00:07:31.279 { 00:07:31.279 "name": "BaseBdev2", 00:07:31.279 "uuid": "a4751140-5264-4bf3-82d8-2d4ae4127fb3", 00:07:31.279 "is_configured": true, 00:07:31.279 "data_offset": 0, 00:07:31.279 "data_size": 65536 00:07:31.279 } 00:07:31.280 ] 00:07:31.280 }' 00:07:31.280 03:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.280 03:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.538 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.538 [2024-11-18 03:56:28.071213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:31.539 [2024-11-18 03:56:28.071286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:31.539 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.539 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:31.539 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:31.539 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.539 03:56:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:31.539 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.539 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60692 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60692 ']' 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60692 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60692 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.798 killing process with pid 60692 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60692' 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60692 00:07:31.798 [2024-11-18 03:56:28.255275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:31.798 03:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60692 00:07:31.798 [2024-11-18 03:56:28.272341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.735 03:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:32.735 00:07:32.735 real 0m4.843s 00:07:32.735 user 0m7.020s 00:07:32.735 sys 0m0.776s 00:07:32.735 03:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.735 03:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.735 ************************************ 00:07:32.735 END TEST raid_state_function_test 00:07:32.735 ************************************ 00:07:32.735 03:56:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:32.735 03:56:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:32.735 03:56:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.735 03:56:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.995 ************************************ 00:07:32.995 START TEST raid_state_function_test_sb 00:07:32.995 ************************************ 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:32.995 Process raid pid: 60945 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60945 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60945' 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60945 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60945 ']' 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.995 03:56:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.995 [2024-11-18 03:56:29.480128] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:32.995 [2024-11-18 03:56:29.480379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.255 [2024-11-18 03:56:29.634870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.255 [2024-11-18 03:56:29.743265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.514 [2024-11-18 03:56:29.926898] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.514 [2024-11-18 03:56:29.927009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.773 [2024-11-18 03:56:30.304311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.773 [2024-11-18 03:56:30.304432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.773 [2024-11-18 03:56:30.304463] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.773 [2024-11-18 03:56:30.304486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.773 
03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.773 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.774 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.774 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.774 "name": "Existed_Raid", 00:07:33.774 "uuid": "2e5d0726-4333-448d-8e2b-4a2d251b9a28", 00:07:33.774 "strip_size_kb": 
64, 00:07:33.774 "state": "configuring", 00:07:33.774 "raid_level": "raid0", 00:07:33.774 "superblock": true, 00:07:33.774 "num_base_bdevs": 2, 00:07:33.774 "num_base_bdevs_discovered": 0, 00:07:33.774 "num_base_bdevs_operational": 2, 00:07:33.774 "base_bdevs_list": [ 00:07:33.774 { 00:07:33.774 "name": "BaseBdev1", 00:07:33.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.774 "is_configured": false, 00:07:33.774 "data_offset": 0, 00:07:33.774 "data_size": 0 00:07:33.774 }, 00:07:33.774 { 00:07:33.774 "name": "BaseBdev2", 00:07:33.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.774 "is_configured": false, 00:07:33.774 "data_offset": 0, 00:07:33.774 "data_size": 0 00:07:33.774 } 00:07:33.774 ] 00:07:33.774 }' 00:07:33.774 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.774 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.342 [2024-11-18 03:56:30.743527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.342 [2024-11-18 03:56:30.743565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.342 03:56:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.342 [2024-11-18 03:56:30.755509] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:34.342 [2024-11-18 03:56:30.755554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:34.342 [2024-11-18 03:56:30.755563] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.342 [2024-11-18 03:56:30.755574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.342 [2024-11-18 03:56:30.801963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.342 BaseBdev1 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.342 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.342 [ 00:07:34.342 { 00:07:34.342 "name": "BaseBdev1", 00:07:34.342 "aliases": [ 00:07:34.342 "b36275d0-676a-492d-8fbe-382e218bce36" 00:07:34.342 ], 00:07:34.342 "product_name": "Malloc disk", 00:07:34.342 "block_size": 512, 00:07:34.342 "num_blocks": 65536, 00:07:34.342 "uuid": "b36275d0-676a-492d-8fbe-382e218bce36", 00:07:34.342 "assigned_rate_limits": { 00:07:34.342 "rw_ios_per_sec": 0, 00:07:34.342 "rw_mbytes_per_sec": 0, 00:07:34.342 "r_mbytes_per_sec": 0, 00:07:34.342 "w_mbytes_per_sec": 0 00:07:34.342 }, 00:07:34.342 "claimed": true, 00:07:34.342 "claim_type": "exclusive_write", 00:07:34.342 "zoned": false, 00:07:34.342 "supported_io_types": { 00:07:34.342 "read": true, 00:07:34.342 "write": true, 00:07:34.342 "unmap": true, 00:07:34.342 "flush": true, 00:07:34.342 "reset": true, 00:07:34.342 "nvme_admin": false, 00:07:34.342 "nvme_io": false, 00:07:34.342 "nvme_io_md": false, 00:07:34.342 "write_zeroes": true, 00:07:34.342 "zcopy": true, 00:07:34.342 "get_zone_info": false, 00:07:34.342 "zone_management": false, 00:07:34.342 "zone_append": false, 00:07:34.342 "compare": false, 00:07:34.342 "compare_and_write": false, 00:07:34.342 
"abort": true, 00:07:34.342 "seek_hole": false, 00:07:34.342 "seek_data": false, 00:07:34.342 "copy": true, 00:07:34.342 "nvme_iov_md": false 00:07:34.342 }, 00:07:34.342 "memory_domains": [ 00:07:34.342 { 00:07:34.342 "dma_device_id": "system", 00:07:34.342 "dma_device_type": 1 00:07:34.343 }, 00:07:34.343 { 00:07:34.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.343 "dma_device_type": 2 00:07:34.343 } 00:07:34.343 ], 00:07:34.343 "driver_specific": {} 00:07:34.343 } 00:07:34.343 ] 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.343 "name": "Existed_Raid", 00:07:34.343 "uuid": "a92cc297-3802-41a2-9ec8-04c659e57fdb", 00:07:34.343 "strip_size_kb": 64, 00:07:34.343 "state": "configuring", 00:07:34.343 "raid_level": "raid0", 00:07:34.343 "superblock": true, 00:07:34.343 "num_base_bdevs": 2, 00:07:34.343 "num_base_bdevs_discovered": 1, 00:07:34.343 "num_base_bdevs_operational": 2, 00:07:34.343 "base_bdevs_list": [ 00:07:34.343 { 00:07:34.343 "name": "BaseBdev1", 00:07:34.343 "uuid": "b36275d0-676a-492d-8fbe-382e218bce36", 00:07:34.343 "is_configured": true, 00:07:34.343 "data_offset": 2048, 00:07:34.343 "data_size": 63488 00:07:34.343 }, 00:07:34.343 { 00:07:34.343 "name": "BaseBdev2", 00:07:34.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.343 "is_configured": false, 00:07:34.343 "data_offset": 0, 00:07:34.343 "data_size": 0 00:07:34.343 } 00:07:34.343 ] 00:07:34.343 }' 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.343 03:56:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.910 [2024-11-18 03:56:31.277198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.910 [2024-11-18 03:56:31.277249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.910 [2024-11-18 03:56:31.289238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.910 [2024-11-18 03:56:31.291052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.910 [2024-11-18 03:56:31.291090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.910 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.911 "name": "Existed_Raid", 00:07:34.911 "uuid": "85f75843-eccb-4067-a049-306814cd9af0", 00:07:34.911 "strip_size_kb": 64, 00:07:34.911 "state": "configuring", 00:07:34.911 "raid_level": "raid0", 00:07:34.911 "superblock": true, 00:07:34.911 "num_base_bdevs": 2, 00:07:34.911 "num_base_bdevs_discovered": 1, 00:07:34.911 "num_base_bdevs_operational": 2, 00:07:34.911 "base_bdevs_list": [ 00:07:34.911 { 00:07:34.911 "name": "BaseBdev1", 00:07:34.911 "uuid": "b36275d0-676a-492d-8fbe-382e218bce36", 00:07:34.911 "is_configured": true, 00:07:34.911 "data_offset": 2048, 
00:07:34.911 "data_size": 63488 00:07:34.911 }, 00:07:34.911 { 00:07:34.911 "name": "BaseBdev2", 00:07:34.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.911 "is_configured": false, 00:07:34.911 "data_offset": 0, 00:07:34.911 "data_size": 0 00:07:34.911 } 00:07:34.911 ] 00:07:34.911 }' 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.911 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.170 [2024-11-18 03:56:31.782504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.170 [2024-11-18 03:56:31.782870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.170 [2024-11-18 03:56:31.782923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.170 [2024-11-18 03:56:31.783219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:35.170 [2024-11-18 03:56:31.783423] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.170 [2024-11-18 03:56:31.783469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:35.170 [2024-11-18 03:56:31.783636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.170 BaseBdev2 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.170 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.430 [ 00:07:35.430 { 00:07:35.430 "name": "BaseBdev2", 00:07:35.430 "aliases": [ 00:07:35.430 "563cdc01-468a-4fa5-ac1b-91b0e6330cd0" 00:07:35.430 ], 00:07:35.430 "product_name": "Malloc disk", 00:07:35.430 "block_size": 512, 00:07:35.430 "num_blocks": 65536, 00:07:35.430 "uuid": "563cdc01-468a-4fa5-ac1b-91b0e6330cd0", 00:07:35.430 "assigned_rate_limits": { 00:07:35.430 "rw_ios_per_sec": 0, 00:07:35.430 "rw_mbytes_per_sec": 0, 00:07:35.430 "r_mbytes_per_sec": 0, 00:07:35.430 "w_mbytes_per_sec": 0 00:07:35.430 }, 00:07:35.430 "claimed": true, 00:07:35.430 "claim_type": 
"exclusive_write", 00:07:35.430 "zoned": false, 00:07:35.430 "supported_io_types": { 00:07:35.430 "read": true, 00:07:35.430 "write": true, 00:07:35.430 "unmap": true, 00:07:35.430 "flush": true, 00:07:35.430 "reset": true, 00:07:35.430 "nvme_admin": false, 00:07:35.430 "nvme_io": false, 00:07:35.430 "nvme_io_md": false, 00:07:35.430 "write_zeroes": true, 00:07:35.430 "zcopy": true, 00:07:35.430 "get_zone_info": false, 00:07:35.430 "zone_management": false, 00:07:35.430 "zone_append": false, 00:07:35.430 "compare": false, 00:07:35.430 "compare_and_write": false, 00:07:35.430 "abort": true, 00:07:35.430 "seek_hole": false, 00:07:35.430 "seek_data": false, 00:07:35.430 "copy": true, 00:07:35.430 "nvme_iov_md": false 00:07:35.430 }, 00:07:35.430 "memory_domains": [ 00:07:35.430 { 00:07:35.430 "dma_device_id": "system", 00:07:35.430 "dma_device_type": 1 00:07:35.430 }, 00:07:35.430 { 00:07:35.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.430 "dma_device_type": 2 00:07:35.430 } 00:07:35.430 ], 00:07:35.430 "driver_specific": {} 00:07:35.430 } 00:07:35.430 ] 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.430 "name": "Existed_Raid", 00:07:35.430 "uuid": "85f75843-eccb-4067-a049-306814cd9af0", 00:07:35.430 "strip_size_kb": 64, 00:07:35.430 "state": "online", 00:07:35.430 "raid_level": "raid0", 00:07:35.430 "superblock": true, 00:07:35.430 "num_base_bdevs": 2, 00:07:35.430 "num_base_bdevs_discovered": 2, 00:07:35.430 "num_base_bdevs_operational": 2, 00:07:35.430 "base_bdevs_list": [ 00:07:35.430 { 00:07:35.430 "name": "BaseBdev1", 00:07:35.430 "uuid": "b36275d0-676a-492d-8fbe-382e218bce36", 00:07:35.430 "is_configured": true, 00:07:35.430 "data_offset": 2048, 00:07:35.430 "data_size": 63488 
00:07:35.430 }, 00:07:35.430 { 00:07:35.430 "name": "BaseBdev2", 00:07:35.430 "uuid": "563cdc01-468a-4fa5-ac1b-91b0e6330cd0", 00:07:35.430 "is_configured": true, 00:07:35.430 "data_offset": 2048, 00:07:35.430 "data_size": 63488 00:07:35.430 } 00:07:35.430 ] 00:07:35.430 }' 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.430 03:56:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.690 [2024-11-18 03:56:32.289946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:35.690 "name": 
"Existed_Raid", 00:07:35.690 "aliases": [ 00:07:35.690 "85f75843-eccb-4067-a049-306814cd9af0" 00:07:35.690 ], 00:07:35.690 "product_name": "Raid Volume", 00:07:35.690 "block_size": 512, 00:07:35.690 "num_blocks": 126976, 00:07:35.690 "uuid": "85f75843-eccb-4067-a049-306814cd9af0", 00:07:35.690 "assigned_rate_limits": { 00:07:35.690 "rw_ios_per_sec": 0, 00:07:35.690 "rw_mbytes_per_sec": 0, 00:07:35.690 "r_mbytes_per_sec": 0, 00:07:35.690 "w_mbytes_per_sec": 0 00:07:35.690 }, 00:07:35.690 "claimed": false, 00:07:35.690 "zoned": false, 00:07:35.690 "supported_io_types": { 00:07:35.690 "read": true, 00:07:35.690 "write": true, 00:07:35.690 "unmap": true, 00:07:35.690 "flush": true, 00:07:35.690 "reset": true, 00:07:35.690 "nvme_admin": false, 00:07:35.690 "nvme_io": false, 00:07:35.690 "nvme_io_md": false, 00:07:35.690 "write_zeroes": true, 00:07:35.690 "zcopy": false, 00:07:35.690 "get_zone_info": false, 00:07:35.690 "zone_management": false, 00:07:35.690 "zone_append": false, 00:07:35.690 "compare": false, 00:07:35.690 "compare_and_write": false, 00:07:35.690 "abort": false, 00:07:35.690 "seek_hole": false, 00:07:35.690 "seek_data": false, 00:07:35.690 "copy": false, 00:07:35.690 "nvme_iov_md": false 00:07:35.690 }, 00:07:35.690 "memory_domains": [ 00:07:35.690 { 00:07:35.690 "dma_device_id": "system", 00:07:35.690 "dma_device_type": 1 00:07:35.690 }, 00:07:35.690 { 00:07:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.690 "dma_device_type": 2 00:07:35.690 }, 00:07:35.690 { 00:07:35.690 "dma_device_id": "system", 00:07:35.690 "dma_device_type": 1 00:07:35.690 }, 00:07:35.690 { 00:07:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.690 "dma_device_type": 2 00:07:35.690 } 00:07:35.690 ], 00:07:35.690 "driver_specific": { 00:07:35.690 "raid": { 00:07:35.690 "uuid": "85f75843-eccb-4067-a049-306814cd9af0", 00:07:35.690 "strip_size_kb": 64, 00:07:35.690 "state": "online", 00:07:35.690 "raid_level": "raid0", 00:07:35.690 "superblock": true, 00:07:35.690 
"num_base_bdevs": 2, 00:07:35.690 "num_base_bdevs_discovered": 2, 00:07:35.690 "num_base_bdevs_operational": 2, 00:07:35.690 "base_bdevs_list": [ 00:07:35.690 { 00:07:35.690 "name": "BaseBdev1", 00:07:35.690 "uuid": "b36275d0-676a-492d-8fbe-382e218bce36", 00:07:35.690 "is_configured": true, 00:07:35.690 "data_offset": 2048, 00:07:35.690 "data_size": 63488 00:07:35.690 }, 00:07:35.690 { 00:07:35.690 "name": "BaseBdev2", 00:07:35.690 "uuid": "563cdc01-468a-4fa5-ac1b-91b0e6330cd0", 00:07:35.690 "is_configured": true, 00:07:35.690 "data_offset": 2048, 00:07:35.690 "data_size": 63488 00:07:35.690 } 00:07:35.690 ] 00:07:35.690 } 00:07:35.690 } 00:07:35.690 }' 00:07:35.690 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:35.950 BaseBdev2' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
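The jq filter in the trace above picks the configured base bdevs out of the RAID dump (`.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`, yielding `BaseBdev1 BaseBdev2`). A minimal sketch of the same selection, written in Python purely for a self-contained illustration (the test itself drives jq through `rpc_cmd`), over an abbreviated copy of the JSON from this log:

```python
import json

# Abbreviated bdev_get_bdevs output for Existed_Raid, copied from the trace above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of: jq -r '.driver_specific.raid.base_bdevs_list[]
#                       | select(.is_configured == true).name'
names = [b["name"]
         for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(" ".join(names))  # -> BaseBdev1 BaseBdev2
```

The test then compares this name list and each base bdev's `block_size`/`md_size` fields against the RAID volume's own, which is what the `cmp_raid_bdev`/`cmp_base_bdev` checks below do.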
00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.950 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.950 [2024-11-18 03:56:32.533361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:35.950 [2024-11-18 03:56:32.533411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.950 [2024-11-18 03:56:32.533472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.208 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:36.208 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:36.208 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:36.208 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:36.208 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:36.208 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.209 03:56:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.209 "name": "Existed_Raid", 00:07:36.209 "uuid": "85f75843-eccb-4067-a049-306814cd9af0", 00:07:36.209 "strip_size_kb": 64, 00:07:36.209 "state": "offline", 00:07:36.209 "raid_level": "raid0", 00:07:36.209 "superblock": true, 00:07:36.209 "num_base_bdevs": 2, 00:07:36.209 "num_base_bdevs_discovered": 1, 00:07:36.209 "num_base_bdevs_operational": 1, 00:07:36.209 "base_bdevs_list": [ 00:07:36.209 { 00:07:36.209 "name": null, 00:07:36.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.209 "is_configured": false, 00:07:36.209 "data_offset": 0, 00:07:36.209 "data_size": 63488 00:07:36.209 }, 00:07:36.209 { 00:07:36.209 "name": "BaseBdev2", 00:07:36.209 "uuid": "563cdc01-468a-4fa5-ac1b-91b0e6330cd0", 00:07:36.209 "is_configured": true, 00:07:36.209 "data_offset": 2048, 00:07:36.209 "data_size": 63488 00:07:36.209 } 00:07:36.209 ] 00:07:36.209 }' 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.209 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.468 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:36.468 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:36.468 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.468 03:56:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:36.468 03:56:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.468 03:56:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.468 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.468 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:36.469 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:36.469 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:36.469 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.469 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.469 [2024-11-18 03:56:33.043244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:36.469 [2024-11-18 03:56:33.043408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:36.728 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.729 03:56:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60945 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60945 ']' 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60945 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60945 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60945' 00:07:36.729 killing process with pid 60945 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60945 00:07:36.729 [2024-11-18 03:56:33.249541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:36.729 03:56:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60945 00:07:36.729 [2024-11-18 03:56:33.267058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.111 03:56:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:38.111 00:07:38.111 real 0m5.055s 00:07:38.111 user 0m7.269s 00:07:38.111 sys 0m0.770s 00:07:38.111 03:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.111 03:56:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.111 ************************************ 00:07:38.111 END TEST raid_state_function_test_sb 00:07:38.111 ************************************ 00:07:38.111 03:56:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:38.111 03:56:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:38.111 03:56:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.111 03:56:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.111 ************************************ 00:07:38.111 START TEST raid_superblock_test 00:07:38.111 ************************************ 00:07:38.111 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:38.111 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:38.112 03:56:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61192 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61192 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61192 ']' 00:07:38.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
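The superblock test starting here builds a raid0 volume with the same geometry the earlier dump reported: 512-byte blocks, 64 KiB strips, and 63488 data blocks per base bdev (after the 2048-block superblock offset), giving `num_blocks: 126976`. A quick arithmetic check of those figures (raid0 capacity is the sum of the base bdevs' data sizes):

```python
# Geometry reported in the trace: raid0 over 2 base bdevs, 512-byte blocks,
# 64 KiB strips, 63488 data blocks per base bdev.
block_size = 512
strip_size_kb = 64
num_base_bdevs = 2
data_size_blocks = 63488  # per base bdev, after the 2048-block superblock offset

strip_size_blocks = strip_size_kb * 1024 // block_size
raid_num_blocks = num_base_bdevs * data_size_blocks

print(strip_size_blocks)  # -> 128
print(raid_num_blocks)    # -> 126976, matching "num_blocks" in the dump
```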
00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.112 03:56:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.112 [2024-11-18 03:56:34.599553] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:38.112 [2024-11-18 03:56:34.599677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61192 ] 00:07:38.374 [2024-11-18 03:56:34.751892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.374 [2024-11-18 03:56:34.892374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.634 [2024-11-18 03:56:35.128681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.634 [2024-11-18 03:56:35.128729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:38.893 03:56:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.893 malloc1 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.893 [2024-11-18 03:56:35.479592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.893 [2024-11-18 03:56:35.479768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.893 [2024-11-18 03:56:35.479813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:38.893 [2024-11-18 03:56:35.479864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.893 [2024-11-18 03:56:35.482242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.893 [2024-11-18 03:56:35.482320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.893 pt1 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:38.893 03:56:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.893 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.153 malloc2 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.153 [2024-11-18 03:56:35.545738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:39.153 [2024-11-18 03:56:35.545864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.153 [2024-11-18 03:56:35.545904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:39.153 
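The `(( i <= num_base_bdevs ))` loop traced above assigns per-iteration names before calling `bdev_malloc_create` and `bdev_passthru_create`: `malloc$i`, `pt$i`, and a zero-padded UUID ending in the iteration index. A sketch of that naming scheme (the helper function below is illustrative, not part of bdev_raid.sh):

```python
# Mirrors the per-iteration naming visible in the trace: for i in 1..num_base_bdevs,
# bdev_malloc = "malloc<i>", bdev_pt = "pt<i>",
# bdev_pt_uuid = "00000000-0000-0000-0000-" + 12-digit zero-padded i.
def base_bdev_names(num_base_bdevs):
    # Illustrative helper; not part of bdev_raid.sh.
    out = []
    for i in range(1, num_base_bdevs + 1):
        out.append((f"malloc{i}", f"pt{i}", f"00000000-0000-0000-0000-{i:012d}"))
    return out

for malloc, pt, uuid in base_bdev_names(2):
    print(malloc, pt, uuid)
# -> malloc1 pt1 00000000-0000-0000-0000-000000000001
#    malloc2 pt2 00000000-0000-0000-0000-000000000002
```

These are exactly the names and UUIDs that appear in the `bdev_passthru_create` calls and the resulting `base_bdevs_list` entries below.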
[2024-11-18 03:56:35.545930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.153 [2024-11-18 03:56:35.548250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.153 [2024-11-18 03:56:35.548330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:39.153 pt2 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.153 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.154 [2024-11-18 03:56:35.557778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:39.154 [2024-11-18 03:56:35.559808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:39.154 [2024-11-18 03:56:35.559986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:39.154 [2024-11-18 03:56:35.560007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.154 [2024-11-18 03:56:35.560246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:39.154 [2024-11-18 03:56:35.560411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:39.154 [2024-11-18 03:56:35.560423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:39.154 [2024-11-18 03:56:35.560562] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.154 "name": "raid_bdev1", 00:07:39.154 "uuid": 
"b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:39.154 "strip_size_kb": 64, 00:07:39.154 "state": "online", 00:07:39.154 "raid_level": "raid0", 00:07:39.154 "superblock": true, 00:07:39.154 "num_base_bdevs": 2, 00:07:39.154 "num_base_bdevs_discovered": 2, 00:07:39.154 "num_base_bdevs_operational": 2, 00:07:39.154 "base_bdevs_list": [ 00:07:39.154 { 00:07:39.154 "name": "pt1", 00:07:39.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.154 "is_configured": true, 00:07:39.154 "data_offset": 2048, 00:07:39.154 "data_size": 63488 00:07:39.154 }, 00:07:39.154 { 00:07:39.154 "name": "pt2", 00:07:39.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.154 "is_configured": true, 00:07:39.154 "data_offset": 2048, 00:07:39.154 "data_size": 63488 00:07:39.154 } 00:07:39.154 ] 00:07:39.154 }' 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.154 03:56:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.414 
[2024-11-18 03:56:36.025215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:39.414 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:39.674 "name": "raid_bdev1", 00:07:39.674 "aliases": [ 00:07:39.674 "b7610adf-fe36-4bd5-9675-46af2eb714cb" 00:07:39.674 ], 00:07:39.674 "product_name": "Raid Volume", 00:07:39.674 "block_size": 512, 00:07:39.674 "num_blocks": 126976, 00:07:39.674 "uuid": "b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:39.674 "assigned_rate_limits": { 00:07:39.674 "rw_ios_per_sec": 0, 00:07:39.674 "rw_mbytes_per_sec": 0, 00:07:39.674 "r_mbytes_per_sec": 0, 00:07:39.674 "w_mbytes_per_sec": 0 00:07:39.674 }, 00:07:39.674 "claimed": false, 00:07:39.674 "zoned": false, 00:07:39.674 "supported_io_types": { 00:07:39.674 "read": true, 00:07:39.674 "write": true, 00:07:39.674 "unmap": true, 00:07:39.674 "flush": true, 00:07:39.674 "reset": true, 00:07:39.674 "nvme_admin": false, 00:07:39.674 "nvme_io": false, 00:07:39.674 "nvme_io_md": false, 00:07:39.674 "write_zeroes": true, 00:07:39.674 "zcopy": false, 00:07:39.674 "get_zone_info": false, 00:07:39.674 "zone_management": false, 00:07:39.674 "zone_append": false, 00:07:39.674 "compare": false, 00:07:39.674 "compare_and_write": false, 00:07:39.674 "abort": false, 00:07:39.674 "seek_hole": false, 00:07:39.674 "seek_data": false, 00:07:39.674 "copy": false, 00:07:39.674 "nvme_iov_md": false 00:07:39.674 }, 00:07:39.674 "memory_domains": [ 00:07:39.674 { 00:07:39.674 "dma_device_id": "system", 00:07:39.674 "dma_device_type": 1 00:07:39.674 }, 00:07:39.674 { 00:07:39.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.674 "dma_device_type": 2 00:07:39.674 }, 00:07:39.674 { 00:07:39.674 "dma_device_id": "system", 00:07:39.674 
"dma_device_type": 1 00:07:39.674 }, 00:07:39.674 { 00:07:39.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.674 "dma_device_type": 2 00:07:39.674 } 00:07:39.674 ], 00:07:39.674 "driver_specific": { 00:07:39.674 "raid": { 00:07:39.674 "uuid": "b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:39.674 "strip_size_kb": 64, 00:07:39.674 "state": "online", 00:07:39.674 "raid_level": "raid0", 00:07:39.674 "superblock": true, 00:07:39.674 "num_base_bdevs": 2, 00:07:39.674 "num_base_bdevs_discovered": 2, 00:07:39.674 "num_base_bdevs_operational": 2, 00:07:39.674 "base_bdevs_list": [ 00:07:39.674 { 00:07:39.674 "name": "pt1", 00:07:39.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.674 "is_configured": true, 00:07:39.674 "data_offset": 2048, 00:07:39.674 "data_size": 63488 00:07:39.674 }, 00:07:39.674 { 00:07:39.674 "name": "pt2", 00:07:39.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.674 "is_configured": true, 00:07:39.674 "data_offset": 2048, 00:07:39.674 "data_size": 63488 00:07:39.674 } 00:07:39.674 ] 00:07:39.674 } 00:07:39.674 } 00:07:39.674 }' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:39.674 pt2' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:39.674 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.675 [2024-11-18 03:56:36.272893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b7610adf-fe36-4bd5-9675-46af2eb714cb 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b7610adf-fe36-4bd5-9675-46af2eb714cb ']' 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.675 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.935 [2024-11-18 03:56:36.316461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.935 [2024-11-18 03:56:36.316585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.935 [2024-11-18 03:56:36.316715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.935 [2024-11-18 03:56:36.316772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.935 [2024-11-18 03:56:36.316786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:39.935 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.935 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:39.935 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.935 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.936 
03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 [2024-11-18 03:56:36.424238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:39.936 [2024-11-18 03:56:36.426391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:39.936 [2024-11-18 03:56:36.426515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:39.936 [2024-11-18 03:56:36.426602] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:39.936 [2024-11-18 03:56:36.426642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.936 [2024-11-18 03:56:36.426666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:07:39.936 request:
00:07:39.936 {
00:07:39.936 "name": "raid_bdev1",
00:07:39.936 "raid_level": "raid0",
00:07:39.936 "base_bdevs": [
00:07:39.936 "malloc1",
00:07:39.936 "malloc2"
00:07:39.936 ],
00:07:39.936 "strip_size_kb": 64,
00:07:39.936 "superblock": false,
00:07:39.936 "method": "bdev_raid_create",
00:07:39.936 "req_id": 1
00:07:39.936 }
00:07:39.936 Got JSON-RPC error response
00:07:39.936 response:
00:07:39.936 {
00:07:39.936 "code": -17,
00:07:39.936 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:39.936 }
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u
00000000-0000-0000-0000-000000000001 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 [2024-11-18 03:56:36.488117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:39.936 [2024-11-18 03:56:36.488207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.936 [2024-11-18 03:56:36.488242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:39.936 [2024-11-18 03:56:36.488268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.936 [2024-11-18 03:56:36.490686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.936 [2024-11-18 03:56:36.490758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:39.936 [2024-11-18 03:56:36.490861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:39.936 [2024-11-18 03:56:36.490952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:39.936 pt1 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.936 "name": "raid_bdev1", 00:07:39.936 "uuid": "b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:39.936 "strip_size_kb": 64, 00:07:39.936 "state": "configuring", 00:07:39.936 "raid_level": "raid0", 00:07:39.936 "superblock": true, 00:07:39.936 "num_base_bdevs": 2, 00:07:39.936 "num_base_bdevs_discovered": 1, 00:07:39.936 "num_base_bdevs_operational": 2, 00:07:39.936 "base_bdevs_list": [ 00:07:39.936 { 00:07:39.936 "name": "pt1", 00:07:39.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.936 "is_configured": true, 00:07:39.936 "data_offset": 2048, 00:07:39.936 "data_size": 63488 00:07:39.936 }, 00:07:39.936 { 00:07:39.936 "name": null, 00:07:39.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.936 "is_configured": false, 00:07:39.936 "data_offset": 2048, 00:07:39.936 "data_size": 63488 00:07:39.936 } 00:07:39.936 ] 00:07:39.936 }' 00:07:39.936 03:56:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.936 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.505 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.506 [2024-11-18 03:56:36.955327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.506 [2024-11-18 03:56:36.955448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.506 [2024-11-18 03:56:36.955484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:40.506 [2024-11-18 03:56:36.955513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.506 [2024-11-18 03:56:36.956042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.506 [2024-11-18 03:56:36.956105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.506 [2024-11-18 03:56:36.956213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:40.506 [2024-11-18 03:56:36.956265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.506 [2024-11-18 03:56:36.956399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:40.506 [2024-11-18 03:56:36.956412] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.506 [2024-11-18 03:56:36.956668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:40.506 [2024-11-18 03:56:36.956822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:40.506 [2024-11-18 03:56:36.956846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:40.506 [2024-11-18 03:56:36.956977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.506 pt2 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.506 03:56:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.506 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.506 "name": "raid_bdev1", 00:07:40.506 "uuid": "b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:40.506 "strip_size_kb": 64, 00:07:40.506 "state": "online", 00:07:40.506 "raid_level": "raid0", 00:07:40.506 "superblock": true, 00:07:40.506 "num_base_bdevs": 2, 00:07:40.506 "num_base_bdevs_discovered": 2, 00:07:40.506 "num_base_bdevs_operational": 2, 00:07:40.506 "base_bdevs_list": [ 00:07:40.506 { 00:07:40.506 "name": "pt1", 00:07:40.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.506 "is_configured": true, 00:07:40.506 "data_offset": 2048, 00:07:40.506 "data_size": 63488 00:07:40.506 }, 00:07:40.506 { 00:07:40.506 "name": "pt2", 00:07:40.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.506 "is_configured": true, 00:07:40.506 "data_offset": 2048, 00:07:40.506 "data_size": 63488 00:07:40.506 } 00:07:40.506 ] 00:07:40.506 }' 00:07:40.506 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.506 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:40.765 
03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:40.765 [2024-11-18 03:56:37.387255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.765 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.024 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:41.024 "name": "raid_bdev1", 00:07:41.024 "aliases": [ 00:07:41.024 "b7610adf-fe36-4bd5-9675-46af2eb714cb" 00:07:41.024 ], 00:07:41.024 "product_name": "Raid Volume", 00:07:41.024 "block_size": 512, 00:07:41.024 "num_blocks": 126976, 00:07:41.024 "uuid": "b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:41.024 "assigned_rate_limits": { 00:07:41.024 "rw_ios_per_sec": 0, 00:07:41.024 "rw_mbytes_per_sec": 0, 00:07:41.024 "r_mbytes_per_sec": 0, 00:07:41.024 "w_mbytes_per_sec": 0 00:07:41.024 }, 00:07:41.024 "claimed": false, 00:07:41.024 "zoned": false, 00:07:41.024 "supported_io_types": { 00:07:41.024 "read": true, 00:07:41.024 "write": true, 00:07:41.024 "unmap": true, 00:07:41.024 "flush": true, 00:07:41.024 "reset": true, 00:07:41.024 "nvme_admin": false, 00:07:41.024 "nvme_io": false, 00:07:41.024 "nvme_io_md": false, 00:07:41.024 
"write_zeroes": true, 00:07:41.024 "zcopy": false, 00:07:41.024 "get_zone_info": false, 00:07:41.024 "zone_management": false, 00:07:41.024 "zone_append": false, 00:07:41.024 "compare": false, 00:07:41.024 "compare_and_write": false, 00:07:41.024 "abort": false, 00:07:41.024 "seek_hole": false, 00:07:41.024 "seek_data": false, 00:07:41.024 "copy": false, 00:07:41.024 "nvme_iov_md": false 00:07:41.024 }, 00:07:41.024 "memory_domains": [ 00:07:41.024 { 00:07:41.024 "dma_device_id": "system", 00:07:41.024 "dma_device_type": 1 00:07:41.024 }, 00:07:41.024 { 00:07:41.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.024 "dma_device_type": 2 00:07:41.024 }, 00:07:41.024 { 00:07:41.024 "dma_device_id": "system", 00:07:41.025 "dma_device_type": 1 00:07:41.025 }, 00:07:41.025 { 00:07:41.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.025 "dma_device_type": 2 00:07:41.025 } 00:07:41.025 ], 00:07:41.025 "driver_specific": { 00:07:41.025 "raid": { 00:07:41.025 "uuid": "b7610adf-fe36-4bd5-9675-46af2eb714cb", 00:07:41.025 "strip_size_kb": 64, 00:07:41.025 "state": "online", 00:07:41.025 "raid_level": "raid0", 00:07:41.025 "superblock": true, 00:07:41.025 "num_base_bdevs": 2, 00:07:41.025 "num_base_bdevs_discovered": 2, 00:07:41.025 "num_base_bdevs_operational": 2, 00:07:41.025 "base_bdevs_list": [ 00:07:41.025 { 00:07:41.025 "name": "pt1", 00:07:41.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.025 "is_configured": true, 00:07:41.025 "data_offset": 2048, 00:07:41.025 "data_size": 63488 00:07:41.025 }, 00:07:41.025 { 00:07:41.025 "name": "pt2", 00:07:41.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.025 "is_configured": true, 00:07:41.025 "data_offset": 2048, 00:07:41.025 "data_size": 63488 00:07:41.025 } 00:07:41.025 ] 00:07:41.025 } 00:07:41.025 } 00:07:41.025 }' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:41.025 pt2' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.025 03:56:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.025 [2024-11-18 03:56:37.594911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b7610adf-fe36-4bd5-9675-46af2eb714cb '!=' b7610adf-fe36-4bd5-9675-46af2eb714cb ']' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61192 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61192 ']' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61192 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.025 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61192 00:07:41.285 03:56:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:41.285 killing process with pid 61192
00:07:41.285 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:41.285 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61192'
00:07:41.285 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61192
00:07:41.285 [2024-11-18 03:56:37.668586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:41.285 [2024-11-18 03:56:37.668718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:41.285 03:56:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61192
00:07:41.285 [2024-11-18 03:56:37.668777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:41.285 [2024-11-18 03:56:37.668801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:41.285 [2024-11-18 03:56:37.896798] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:42.666 03:56:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:07:42.666
00:07:42.666 real 0m4.567s
00:07:42.666 user 0m6.288s
00:07:42.666 sys 0m0.795s
00:07:42.666 03:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.666 ************************************
00:07:42.666 END TEST raid_superblock_test
00:07:42.666 ************************************
00:07:42.666 03:56:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:42.666 03:56:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read
00:07:42.666 03:56:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:42.666 03:56:39 bdev_raid -- common/autotest_common.sh@1111 -- #
xtrace_disable 00:07:42.666 03:56:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.666 ************************************ 00:07:42.666 START TEST raid_read_error_test 00:07:42.666 ************************************ 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Nr16c5gLCe 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61403 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61403 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61403 ']' 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.666 03:56:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.666 [2024-11-18 03:56:39.267404] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:42.666 [2024-11-18 03:56:39.267540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61403 ] 00:07:42.925 [2024-11-18 03:56:39.446773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.184 [2024-11-18 03:56:39.579380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.185 [2024-11-18 03:56:39.808213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.185 [2024-11-18 03:56:39.808264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.444 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.444 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.444 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.444 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.444 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.444 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.705 BaseBdev1_malloc 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.705 true 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.705 [2024-11-18 03:56:40.131377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.705 [2024-11-18 03:56:40.131545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.705 [2024-11-18 03:56:40.131572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:43.705 [2024-11-18 03:56:40.131584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.705 [2024-11-18 03:56:40.134054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.705 [2024-11-18 03:56:40.134097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.705 BaseBdev1 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:43.705 BaseBdev2_malloc 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.705 true 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.705 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.705 [2024-11-18 03:56:40.204206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:43.706 [2024-11-18 03:56:40.204361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.706 [2024-11-18 03:56:40.204383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:43.706 [2024-11-18 03:56:40.204394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.706 [2024-11-18 03:56:40.206718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.706 [2024-11-18 03:56:40.206757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:43.706 BaseBdev2 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:43.706 03:56:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.706 [2024-11-18 03:56:40.216301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.706 [2024-11-18 03:56:40.218672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:43.706 [2024-11-18 03:56:40.218885] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:43.706 [2024-11-18 03:56:40.218904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:43.706 [2024-11-18 03:56:40.219159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:43.706 [2024-11-18 03:56:40.219370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:43.706 [2024-11-18 03:56:40.219385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:43.706 [2024-11-18 03:56:40.219547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.706 "name": "raid_bdev1", 00:07:43.706 "uuid": "342141e3-1a14-4627-83d4-a36600023d2b", 00:07:43.706 "strip_size_kb": 64, 00:07:43.706 "state": "online", 00:07:43.706 "raid_level": "raid0", 00:07:43.706 "superblock": true, 00:07:43.706 "num_base_bdevs": 2, 00:07:43.706 "num_base_bdevs_discovered": 2, 00:07:43.706 "num_base_bdevs_operational": 2, 00:07:43.706 "base_bdevs_list": [ 00:07:43.706 { 00:07:43.706 "name": "BaseBdev1", 00:07:43.706 "uuid": "633aa00f-1511-5a97-adb6-5e60d5183ec7", 00:07:43.706 "is_configured": true, 00:07:43.706 "data_offset": 2048, 00:07:43.706 "data_size": 63488 00:07:43.706 }, 00:07:43.706 { 00:07:43.706 "name": "BaseBdev2", 00:07:43.706 "uuid": "793b340e-e13b-55e2-afab-01f8caf484a0", 00:07:43.706 "is_configured": true, 00:07:43.706 "data_offset": 2048, 00:07:43.706 "data_size": 63488 00:07:43.706 } 00:07:43.706 ] 00:07:43.706 }' 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.706 03:56:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.276 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:44.276 03:56:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.276 [2024-11-18 03:56:40.756909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.217 "name": "raid_bdev1", 00:07:45.217 "uuid": "342141e3-1a14-4627-83d4-a36600023d2b", 00:07:45.217 "strip_size_kb": 64, 00:07:45.217 "state": "online", 00:07:45.217 "raid_level": "raid0", 00:07:45.217 "superblock": true, 00:07:45.217 "num_base_bdevs": 2, 00:07:45.217 "num_base_bdevs_discovered": 2, 00:07:45.217 "num_base_bdevs_operational": 2, 00:07:45.217 "base_bdevs_list": [ 00:07:45.217 { 00:07:45.217 "name": "BaseBdev1", 00:07:45.217 "uuid": "633aa00f-1511-5a97-adb6-5e60d5183ec7", 00:07:45.217 "is_configured": true, 00:07:45.217 "data_offset": 2048, 00:07:45.217 "data_size": 63488 00:07:45.217 }, 00:07:45.217 { 00:07:45.217 "name": "BaseBdev2", 00:07:45.217 "uuid": "793b340e-e13b-55e2-afab-01f8caf484a0", 00:07:45.217 "is_configured": true, 00:07:45.217 "data_offset": 2048, 00:07:45.217 "data_size": 63488 00:07:45.217 } 00:07:45.217 ] 00:07:45.217 }' 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.217 03:56:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.477 03:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.477 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.477 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.477 [2024-11-18 03:56:42.084999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.477 [2024-11-18 03:56:42.085156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.477 [2024-11-18 03:56:42.087735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.477 [2024-11-18 03:56:42.087835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.477 [2024-11-18 03:56:42.087892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.477 [2024-11-18 03:56:42.087939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:45.477 { 00:07:45.477 "results": [ 00:07:45.477 { 00:07:45.477 "job": "raid_bdev1", 00:07:45.477 "core_mask": "0x1", 00:07:45.477 "workload": "randrw", 00:07:45.477 "percentage": 50, 00:07:45.477 "status": "finished", 00:07:45.477 "queue_depth": 1, 00:07:45.477 "io_size": 131072, 00:07:45.477 "runtime": 1.328811, 00:07:45.477 "iops": 14514.479485795948, 00:07:45.477 "mibps": 1814.3099357244935, 00:07:45.477 "io_failed": 1, 00:07:45.477 "io_timeout": 0, 00:07:45.477 "avg_latency_us": 96.91469556381868, 00:07:45.477 "min_latency_us": 24.929257641921396, 00:07:45.477 "max_latency_us": 1445.2262008733624 00:07:45.477 } 00:07:45.477 ], 00:07:45.477 "core_count": 1 00:07:45.477 } 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61403 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61403 ']' 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61403 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.478 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61403 00:07:45.758 killing process with pid 61403 00:07:45.758 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.758 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.758 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61403' 00:07:45.758 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61403 00:07:45.758 [2024-11-18 03:56:42.128519] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.758 03:56:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61403 00:07:45.758 [2024-11-18 03:56:42.277304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Nr16c5gLCe 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.160 ************************************ 00:07:47.160 END TEST raid_read_error_test 00:07:47.160 ************************************ 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:47.160 00:07:47.160 real 0m4.398s 00:07:47.160 user 0m5.097s 00:07:47.160 sys 0m0.633s 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.160 03:56:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.160 03:56:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:47.160 03:56:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.160 03:56:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.160 03:56:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.160 ************************************ 00:07:47.160 START TEST raid_write_error_test 00:07:47.160 ************************************ 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.160 03:56:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AO4OiAJ3Rc 00:07:47.160 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61549 00:07:47.161 03:56:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61549 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61549 ']' 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.161 03:56:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.161 [2024-11-18 03:56:43.721187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:47.161 [2024-11-18 03:56:43.721402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61549 ] 00:07:47.420 [2024-11-18 03:56:43.896328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.420 [2024-11-18 03:56:44.027335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.681 [2024-11-18 03:56:44.258305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.681 [2024-11-18 03:56:44.258379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.941 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.941 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:47.941 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:47.941 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:47.941 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.941 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 BaseBdev1_malloc 00:07:48.201 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.201 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:48.201 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.201 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 true 00:07:48.201 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 [2024-11-18 03:56:44.626049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.202 [2024-11-18 03:56:44.626117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.202 [2024-11-18 03:56:44.626137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:48.202 [2024-11-18 03:56:44.626148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.202 [2024-11-18 03:56:44.628512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.202 [2024-11-18 03:56:44.628553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.202 BaseBdev1 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 BaseBdev2_malloc 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.202 03:56:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 true 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 [2024-11-18 03:56:44.699385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.202 [2024-11-18 03:56:44.699451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.202 [2024-11-18 03:56:44.699468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:48.202 [2024-11-18 03:56:44.699479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.202 [2024-11-18 03:56:44.701759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.202 [2024-11-18 03:56:44.701798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.202 BaseBdev2 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 [2024-11-18 03:56:44.711428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:48.202 [2024-11-18 03:56:44.713469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.202 [2024-11-18 03:56:44.713662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:48.202 [2024-11-18 03:56:44.713679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.202 [2024-11-18 03:56:44.713917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:48.202 [2024-11-18 03:56:44.714090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:48.202 [2024-11-18 03:56:44.714170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:48.202 [2024-11-18 03:56:44.714332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.202 "name": "raid_bdev1", 00:07:48.202 "uuid": "5b469838-71af-4866-ad52-f397712a3d6e", 00:07:48.202 "strip_size_kb": 64, 00:07:48.202 "state": "online", 00:07:48.202 "raid_level": "raid0", 00:07:48.202 "superblock": true, 00:07:48.202 "num_base_bdevs": 2, 00:07:48.202 "num_base_bdevs_discovered": 2, 00:07:48.202 "num_base_bdevs_operational": 2, 00:07:48.202 "base_bdevs_list": [ 00:07:48.202 { 00:07:48.202 "name": "BaseBdev1", 00:07:48.202 "uuid": "95485183-cdb4-5f27-9d12-77ce8b595f46", 00:07:48.202 "is_configured": true, 00:07:48.202 "data_offset": 2048, 00:07:48.202 "data_size": 63488 00:07:48.202 }, 00:07:48.202 { 00:07:48.202 "name": "BaseBdev2", 00:07:48.202 "uuid": "29f572a5-035a-5a1f-ac6d-adf3b03ba345", 00:07:48.202 "is_configured": true, 00:07:48.202 "data_offset": 2048, 00:07:48.202 "data_size": 63488 00:07:48.202 } 00:07:48.202 ] 00:07:48.202 }' 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.202 03:56:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.772 03:56:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:48.772 03:56:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:48.772 [2024-11-18 03:56:45.216318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.712 03:56:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.712 "name": "raid_bdev1", 00:07:49.712 "uuid": "5b469838-71af-4866-ad52-f397712a3d6e", 00:07:49.712 "strip_size_kb": 64, 00:07:49.712 "state": "online", 00:07:49.712 "raid_level": "raid0", 00:07:49.712 "superblock": true, 00:07:49.712 "num_base_bdevs": 2, 00:07:49.712 "num_base_bdevs_discovered": 2, 00:07:49.712 "num_base_bdevs_operational": 2, 00:07:49.712 "base_bdevs_list": [ 00:07:49.712 { 00:07:49.712 "name": "BaseBdev1", 00:07:49.712 "uuid": "95485183-cdb4-5f27-9d12-77ce8b595f46", 00:07:49.712 "is_configured": true, 00:07:49.712 "data_offset": 2048, 00:07:49.712 "data_size": 63488 00:07:49.712 }, 00:07:49.712 { 00:07:49.712 "name": "BaseBdev2", 00:07:49.712 "uuid": "29f572a5-035a-5a1f-ac6d-adf3b03ba345", 00:07:49.712 "is_configured": true, 00:07:49.712 "data_offset": 2048, 00:07:49.712 "data_size": 63488 00:07:49.712 } 00:07:49.712 ] 00:07:49.712 }' 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.712 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.972 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:49.972 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.972 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.972 [2024-11-18 03:56:46.577022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:49.972 [2024-11-18 03:56:46.577080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.972 [2024-11-18 03:56:46.579774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.972 [2024-11-18 03:56:46.579819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.972 [2024-11-18 03:56:46.579968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.972 [2024-11-18 03:56:46.580020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:49.972 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.972 { 00:07:49.972 "results": [ 00:07:49.972 { 00:07:49.972 "job": "raid_bdev1", 00:07:49.972 "core_mask": "0x1", 00:07:49.972 "workload": "randrw", 00:07:49.972 "percentage": 50, 00:07:49.972 "status": "finished", 00:07:49.972 "queue_depth": 1, 00:07:49.972 "io_size": 131072, 00:07:49.972 "runtime": 1.361232, 00:07:49.972 "iops": 14428.840932331888, 00:07:49.972 "mibps": 1803.605116541486, 00:07:49.972 "io_failed": 1, 00:07:49.972 "io_timeout": 0, 00:07:49.972 "avg_latency_us": 97.66686180446588, 00:07:49.972 "min_latency_us": 24.929257641921396, 00:07:49.972 "max_latency_us": 1395.1441048034935 00:07:49.972 } 00:07:49.972 ], 00:07:49.972 "core_count": 1 00:07:49.972 } 00:07:49.972 03:56:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61549 00:07:49.972 03:56:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61549 ']' 00:07:49.973 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61549 00:07:49.973 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:49.973 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.973 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61549 00:07:50.232 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.232 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.232 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61549' 00:07:50.232 killing process with pid 61549 00:07:50.232 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61549 00:07:50.232 [2024-11-18 03:56:46.614442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.232 03:56:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61549 00:07:50.232 [2024-11-18 03:56:46.756806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.615 03:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AO4OiAJ3Rc 00:07:51.616 03:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:51.616 03:56:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:51.616 ************************************ 00:07:51.616 END TEST raid_write_error_test 00:07:51.616 ************************************ 00:07:51.616 03:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:51.616 03:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:51.616 
03:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.616 03:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:51.616 03:56:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:51.616 00:07:51.616 real 0m4.391s 00:07:51.616 user 0m5.122s 00:07:51.616 sys 0m0.617s 00:07:51.616 03:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.616 03:56:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.616 03:56:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:51.616 03:56:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:51.616 03:56:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:51.616 03:56:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.616 03:56:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.616 ************************************ 00:07:51.616 START TEST raid_state_function_test 00:07:51.616 ************************************ 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:51.616 Process raid pid: 61687 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61687 
00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61687' 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61687 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61687 ']' 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.616 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.616 [2024-11-18 03:56:48.176107] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:51.616 [2024-11-18 03:56:48.176314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.875 [2024-11-18 03:56:48.349235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.875 [2024-11-18 03:56:48.486410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.135 [2024-11-18 03:56:48.725442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.135 [2024-11-18 03:56:48.725567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.396 [2024-11-18 03:56:48.995158] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.396 [2024-11-18 03:56:48.995265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.396 [2024-11-18 03:56:48.995276] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.396 [2024-11-18 03:56:48.995287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.396 03:56:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.396 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.397 03:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.397 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.657 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.657 "name": "Existed_Raid", 00:07:52.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.657 "strip_size_kb": 64, 00:07:52.657 "state": "configuring", 00:07:52.657 
"raid_level": "concat", 00:07:52.657 "superblock": false, 00:07:52.657 "num_base_bdevs": 2, 00:07:52.657 "num_base_bdevs_discovered": 0, 00:07:52.657 "num_base_bdevs_operational": 2, 00:07:52.657 "base_bdevs_list": [ 00:07:52.657 { 00:07:52.657 "name": "BaseBdev1", 00:07:52.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.657 "is_configured": false, 00:07:52.657 "data_offset": 0, 00:07:52.657 "data_size": 0 00:07:52.657 }, 00:07:52.657 { 00:07:52.657 "name": "BaseBdev2", 00:07:52.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.657 "is_configured": false, 00:07:52.657 "data_offset": 0, 00:07:52.657 "data_size": 0 00:07:52.657 } 00:07:52.657 ] 00:07:52.657 }' 00:07:52.657 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.657 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.917 [2024-11-18 03:56:49.418500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.917 [2024-11-18 03:56:49.418650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:52.917 [2024-11-18 03:56:49.430426] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.917 [2024-11-18 03:56:49.430515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.917 [2024-11-18 03:56:49.430543] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.917 [2024-11-18 03:56:49.430571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.917 [2024-11-18 03:56:49.486452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.917 BaseBdev1 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.917 [ 00:07:52.917 { 00:07:52.917 "name": "BaseBdev1", 00:07:52.917 "aliases": [ 00:07:52.917 "3c707549-e41f-41e1-8fdd-8f1663ec8bfc" 00:07:52.917 ], 00:07:52.917 "product_name": "Malloc disk", 00:07:52.917 "block_size": 512, 00:07:52.917 "num_blocks": 65536, 00:07:52.917 "uuid": "3c707549-e41f-41e1-8fdd-8f1663ec8bfc", 00:07:52.917 "assigned_rate_limits": { 00:07:52.917 "rw_ios_per_sec": 0, 00:07:52.917 "rw_mbytes_per_sec": 0, 00:07:52.917 "r_mbytes_per_sec": 0, 00:07:52.917 "w_mbytes_per_sec": 0 00:07:52.917 }, 00:07:52.917 "claimed": true, 00:07:52.917 "claim_type": "exclusive_write", 00:07:52.917 "zoned": false, 00:07:52.917 "supported_io_types": { 00:07:52.917 "read": true, 00:07:52.917 "write": true, 00:07:52.917 "unmap": true, 00:07:52.917 "flush": true, 00:07:52.917 "reset": true, 00:07:52.917 "nvme_admin": false, 00:07:52.917 "nvme_io": false, 00:07:52.917 "nvme_io_md": false, 00:07:52.917 "write_zeroes": true, 00:07:52.917 "zcopy": true, 00:07:52.917 "get_zone_info": false, 00:07:52.917 "zone_management": false, 00:07:52.917 "zone_append": false, 00:07:52.917 "compare": false, 00:07:52.917 "compare_and_write": false, 00:07:52.917 "abort": true, 00:07:52.917 "seek_hole": false, 00:07:52.917 "seek_data": false, 00:07:52.917 "copy": true, 00:07:52.917 "nvme_iov_md": 
false 00:07:52.917 }, 00:07:52.917 "memory_domains": [ 00:07:52.917 { 00:07:52.917 "dma_device_id": "system", 00:07:52.917 "dma_device_type": 1 00:07:52.917 }, 00:07:52.917 { 00:07:52.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.917 "dma_device_type": 2 00:07:52.917 } 00:07:52.917 ], 00:07:52.917 "driver_specific": {} 00:07:52.917 } 00:07:52.917 ] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.917 03:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.917 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.177 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.177 "name": "Existed_Raid", 00:07:53.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.177 "strip_size_kb": 64, 00:07:53.177 "state": "configuring", 00:07:53.177 "raid_level": "concat", 00:07:53.177 "superblock": false, 00:07:53.177 "num_base_bdevs": 2, 00:07:53.177 "num_base_bdevs_discovered": 1, 00:07:53.177 "num_base_bdevs_operational": 2, 00:07:53.177 "base_bdevs_list": [ 00:07:53.177 { 00:07:53.177 "name": "BaseBdev1", 00:07:53.177 "uuid": "3c707549-e41f-41e1-8fdd-8f1663ec8bfc", 00:07:53.177 "is_configured": true, 00:07:53.177 "data_offset": 0, 00:07:53.177 "data_size": 65536 00:07:53.177 }, 00:07:53.177 { 00:07:53.177 "name": "BaseBdev2", 00:07:53.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.177 "is_configured": false, 00:07:53.177 "data_offset": 0, 00:07:53.177 "data_size": 0 00:07:53.177 } 00:07:53.177 ] 00:07:53.177 }' 00:07:53.177 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.177 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 [2024-11-18 03:56:49.981694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.443 [2024-11-18 03:56:49.981872] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.443 [2024-11-18 03:56:49.993688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.443 [2024-11-18 03:56:49.995973] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.443 [2024-11-18 03:56:49.996022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.443 03:56:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.443 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.443 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.443 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.443 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.443 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.443 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.444 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.444 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.444 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.444 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.444 "name": "Existed_Raid", 00:07:53.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.444 "strip_size_kb": 64, 00:07:53.444 "state": "configuring", 00:07:53.444 "raid_level": "concat", 00:07:53.444 "superblock": false, 00:07:53.444 "num_base_bdevs": 2, 00:07:53.444 "num_base_bdevs_discovered": 1, 00:07:53.444 "num_base_bdevs_operational": 2, 00:07:53.444 "base_bdevs_list": [ 00:07:53.444 { 00:07:53.444 "name": "BaseBdev1", 00:07:53.444 "uuid": "3c707549-e41f-41e1-8fdd-8f1663ec8bfc", 00:07:53.444 "is_configured": true, 00:07:53.444 "data_offset": 0, 00:07:53.444 "data_size": 65536 00:07:53.444 }, 00:07:53.444 { 00:07:53.444 "name": "BaseBdev2", 00:07:53.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.444 "is_configured": false, 00:07:53.444 "data_offset": 0, 00:07:53.444 "data_size": 0 
00:07:53.444 } 00:07:53.444 ] 00:07:53.444 }' 00:07:53.444 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.444 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 [2024-11-18 03:56:50.442199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.012 [2024-11-18 03:56:50.442334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.012 [2024-11-18 03:56:50.442361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:54.012 [2024-11-18 03:56:50.442692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.012 [2024-11-18 03:56:50.442920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.012 [2024-11-18 03:56:50.442970] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:54.012 [2024-11-18 03:56:50.443308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.012 BaseBdev2 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.012 03:56:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.012 [ 00:07:54.012 { 00:07:54.012 "name": "BaseBdev2", 00:07:54.012 "aliases": [ 00:07:54.012 "7b701795-32aa-4a9b-a94f-6f89dba29202" 00:07:54.012 ], 00:07:54.012 "product_name": "Malloc disk", 00:07:54.012 "block_size": 512, 00:07:54.012 "num_blocks": 65536, 00:07:54.012 "uuid": "7b701795-32aa-4a9b-a94f-6f89dba29202", 00:07:54.012 "assigned_rate_limits": { 00:07:54.012 "rw_ios_per_sec": 0, 00:07:54.012 "rw_mbytes_per_sec": 0, 00:07:54.012 "r_mbytes_per_sec": 0, 00:07:54.012 "w_mbytes_per_sec": 0 00:07:54.012 }, 00:07:54.012 "claimed": true, 00:07:54.012 "claim_type": "exclusive_write", 00:07:54.012 "zoned": false, 00:07:54.012 "supported_io_types": { 00:07:54.012 "read": true, 00:07:54.012 "write": true, 00:07:54.012 "unmap": true, 00:07:54.012 "flush": true, 00:07:54.012 "reset": true, 00:07:54.012 "nvme_admin": false, 00:07:54.012 "nvme_io": false, 00:07:54.012 "nvme_io_md": 
false, 00:07:54.012 "write_zeroes": true, 00:07:54.012 "zcopy": true, 00:07:54.012 "get_zone_info": false, 00:07:54.012 "zone_management": false, 00:07:54.012 "zone_append": false, 00:07:54.012 "compare": false, 00:07:54.012 "compare_and_write": false, 00:07:54.012 "abort": true, 00:07:54.012 "seek_hole": false, 00:07:54.012 "seek_data": false, 00:07:54.012 "copy": true, 00:07:54.012 "nvme_iov_md": false 00:07:54.012 }, 00:07:54.012 "memory_domains": [ 00:07:54.012 { 00:07:54.012 "dma_device_id": "system", 00:07:54.012 "dma_device_type": 1 00:07:54.012 }, 00:07:54.012 { 00:07:54.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.012 "dma_device_type": 2 00:07:54.012 } 00:07:54.012 ], 00:07:54.012 "driver_specific": {} 00:07:54.012 } 00:07:54.012 ] 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.012 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.013 "name": "Existed_Raid", 00:07:54.013 "uuid": "de85950e-5130-41e0-81bf-87ac3983f442", 00:07:54.013 "strip_size_kb": 64, 00:07:54.013 "state": "online", 00:07:54.013 "raid_level": "concat", 00:07:54.013 "superblock": false, 00:07:54.013 "num_base_bdevs": 2, 00:07:54.013 "num_base_bdevs_discovered": 2, 00:07:54.013 "num_base_bdevs_operational": 2, 00:07:54.013 "base_bdevs_list": [ 00:07:54.013 { 00:07:54.013 "name": "BaseBdev1", 00:07:54.013 "uuid": "3c707549-e41f-41e1-8fdd-8f1663ec8bfc", 00:07:54.013 "is_configured": true, 00:07:54.013 "data_offset": 0, 00:07:54.013 "data_size": 65536 00:07:54.013 }, 00:07:54.013 { 00:07:54.013 "name": "BaseBdev2", 00:07:54.013 "uuid": "7b701795-32aa-4a9b-a94f-6f89dba29202", 00:07:54.013 "is_configured": true, 00:07:54.013 "data_offset": 0, 00:07:54.013 "data_size": 65536 00:07:54.013 } 00:07:54.013 ] 00:07:54.013 }' 00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:54.013 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.272 [2024-11-18 03:56:50.869704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.272 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:54.272 "name": "Existed_Raid", 00:07:54.272 "aliases": [ 00:07:54.272 "de85950e-5130-41e0-81bf-87ac3983f442" 00:07:54.272 ], 00:07:54.272 "product_name": "Raid Volume", 00:07:54.272 "block_size": 512, 00:07:54.272 "num_blocks": 131072, 00:07:54.272 "uuid": "de85950e-5130-41e0-81bf-87ac3983f442", 00:07:54.272 "assigned_rate_limits": { 00:07:54.272 "rw_ios_per_sec": 0, 00:07:54.272 "rw_mbytes_per_sec": 0, 00:07:54.272 "r_mbytes_per_sec": 
0, 00:07:54.272 "w_mbytes_per_sec": 0 00:07:54.272 }, 00:07:54.272 "claimed": false, 00:07:54.272 "zoned": false, 00:07:54.272 "supported_io_types": { 00:07:54.272 "read": true, 00:07:54.272 "write": true, 00:07:54.272 "unmap": true, 00:07:54.272 "flush": true, 00:07:54.272 "reset": true, 00:07:54.272 "nvme_admin": false, 00:07:54.272 "nvme_io": false, 00:07:54.272 "nvme_io_md": false, 00:07:54.272 "write_zeroes": true, 00:07:54.272 "zcopy": false, 00:07:54.272 "get_zone_info": false, 00:07:54.272 "zone_management": false, 00:07:54.272 "zone_append": false, 00:07:54.272 "compare": false, 00:07:54.272 "compare_and_write": false, 00:07:54.272 "abort": false, 00:07:54.272 "seek_hole": false, 00:07:54.272 "seek_data": false, 00:07:54.272 "copy": false, 00:07:54.272 "nvme_iov_md": false 00:07:54.272 }, 00:07:54.272 "memory_domains": [ 00:07:54.272 { 00:07:54.272 "dma_device_id": "system", 00:07:54.272 "dma_device_type": 1 00:07:54.272 }, 00:07:54.272 { 00:07:54.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.272 "dma_device_type": 2 00:07:54.272 }, 00:07:54.272 { 00:07:54.272 "dma_device_id": "system", 00:07:54.272 "dma_device_type": 1 00:07:54.272 }, 00:07:54.272 { 00:07:54.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.272 "dma_device_type": 2 00:07:54.272 } 00:07:54.272 ], 00:07:54.272 "driver_specific": { 00:07:54.272 "raid": { 00:07:54.273 "uuid": "de85950e-5130-41e0-81bf-87ac3983f442", 00:07:54.273 "strip_size_kb": 64, 00:07:54.273 "state": "online", 00:07:54.273 "raid_level": "concat", 00:07:54.273 "superblock": false, 00:07:54.273 "num_base_bdevs": 2, 00:07:54.273 "num_base_bdevs_discovered": 2, 00:07:54.273 "num_base_bdevs_operational": 2, 00:07:54.273 "base_bdevs_list": [ 00:07:54.273 { 00:07:54.273 "name": "BaseBdev1", 00:07:54.273 "uuid": "3c707549-e41f-41e1-8fdd-8f1663ec8bfc", 00:07:54.273 "is_configured": true, 00:07:54.273 "data_offset": 0, 00:07:54.273 "data_size": 65536 00:07:54.273 }, 00:07:54.273 { 00:07:54.273 "name": "BaseBdev2", 
00:07:54.273 "uuid": "7b701795-32aa-4a9b-a94f-6f89dba29202", 00:07:54.273 "is_configured": true, 00:07:54.273 "data_offset": 0, 00:07:54.273 "data_size": 65536 00:07:54.273 } 00:07:54.273 ] 00:07:54.273 } 00:07:54.273 } 00:07:54.273 }' 00:07:54.273 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:54.533 BaseBdev2' 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.533 03:56:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.533 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.533 [2024-11-18 03:56:51.065175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.533 [2024-11-18 03:56:51.065264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.533 [2024-11-18 03:56:51.065322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.793 "name": "Existed_Raid", 00:07:54.793 "uuid": "de85950e-5130-41e0-81bf-87ac3983f442", 00:07:54.793 "strip_size_kb": 64, 00:07:54.793 
"state": "offline", 00:07:54.793 "raid_level": "concat", 00:07:54.793 "superblock": false, 00:07:54.793 "num_base_bdevs": 2, 00:07:54.793 "num_base_bdevs_discovered": 1, 00:07:54.793 "num_base_bdevs_operational": 1, 00:07:54.793 "base_bdevs_list": [ 00:07:54.793 { 00:07:54.793 "name": null, 00:07:54.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.793 "is_configured": false, 00:07:54.793 "data_offset": 0, 00:07:54.793 "data_size": 65536 00:07:54.793 }, 00:07:54.793 { 00:07:54.793 "name": "BaseBdev2", 00:07:54.793 "uuid": "7b701795-32aa-4a9b-a94f-6f89dba29202", 00:07:54.793 "is_configured": true, 00:07:54.793 "data_offset": 0, 00:07:54.793 "data_size": 65536 00:07:54.793 } 00:07:54.793 ] 00:07:54.793 }' 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.793 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.053 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.053 [2024-11-18 03:56:51.672955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:55.053 [2024-11-18 03:56:51.673112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61687 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61687 ']' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61687 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61687 00:07:55.314 killing process with pid 61687 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61687' 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61687 00:07:55.314 [2024-11-18 03:56:51.873040] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.314 03:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61687 00:07:55.314 [2024-11-18 03:56:51.890155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:56.697 00:07:56.697 real 0m4.983s 00:07:56.697 user 0m7.021s 00:07:56.697 sys 0m0.870s 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.697 ************************************ 00:07:56.697 END TEST raid_state_function_test 00:07:56.697 ************************************ 00:07:56.697 03:56:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:56.697 03:56:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:56.697 03:56:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.697 03:56:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:56.697 ************************************ 00:07:56.697 START TEST raid_state_function_test_sb 00:07:56.697 ************************************ 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61940 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61940' 00:07:56.697 Process raid pid: 61940 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61940 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61940 ']' 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.697 03:56:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.697 [2024-11-18 03:56:53.224133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:56.697 [2024-11-18 03:56:53.224369] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.957 [2024-11-18 03:56:53.398997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.957 [2024-11-18 03:56:53.527304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.218 [2024-11-18 03:56:53.764748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.218 [2024-11-18 03:56:53.764790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.478 [2024-11-18 03:56:54.054463] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:57.478 [2024-11-18 03:56:54.054525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.478 [2024-11-18 03:56:54.054535] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.478 [2024-11-18 03:56:54.054544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.478 "name": "Existed_Raid", 00:07:57.478 "uuid": "f02f8c4e-8cad-4da2-bd68-84df1629c390", 00:07:57.478 "strip_size_kb": 64, 00:07:57.478 "state": "configuring", 00:07:57.478 "raid_level": "concat", 00:07:57.478 "superblock": true, 00:07:57.478 "num_base_bdevs": 2, 00:07:57.478 "num_base_bdevs_discovered": 0, 00:07:57.478 "num_base_bdevs_operational": 2, 00:07:57.478 "base_bdevs_list": [ 00:07:57.478 { 00:07:57.478 "name": "BaseBdev1", 00:07:57.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.478 "is_configured": false, 00:07:57.478 "data_offset": 0, 00:07:57.478 "data_size": 0 00:07:57.478 }, 00:07:57.478 { 00:07:57.478 "name": "BaseBdev2", 00:07:57.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.478 "is_configured": false, 00:07:57.478 "data_offset": 0, 00:07:57.478 "data_size": 0 00:07:57.478 } 00:07:57.478 ] 00:07:57.478 }' 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.478 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 [2024-11-18 03:56:54.465686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:58.049 [2024-11-18 03:56:54.465812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 [2024-11-18 03:56:54.473686] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.049 [2024-11-18 03:56:54.473766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.049 [2024-11-18 03:56:54.473792] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.049 [2024-11-18 03:56:54.473817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 [2024-11-18 03:56:54.524242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.049 BaseBdev1 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 [ 00:07:58.049 { 00:07:58.049 "name": "BaseBdev1", 00:07:58.049 "aliases": [ 00:07:58.049 "b1da8ea0-63ab-457c-8121-5f6452454c15" 00:07:58.049 ], 00:07:58.049 "product_name": "Malloc disk", 00:07:58.049 "block_size": 512, 00:07:58.049 "num_blocks": 65536, 00:07:58.049 "uuid": "b1da8ea0-63ab-457c-8121-5f6452454c15", 00:07:58.049 "assigned_rate_limits": { 00:07:58.049 "rw_ios_per_sec": 0, 00:07:58.049 "rw_mbytes_per_sec": 0, 00:07:58.049 "r_mbytes_per_sec": 0, 00:07:58.049 "w_mbytes_per_sec": 0 00:07:58.049 }, 00:07:58.049 "claimed": true, 
00:07:58.049 "claim_type": "exclusive_write", 00:07:58.049 "zoned": false, 00:07:58.049 "supported_io_types": { 00:07:58.049 "read": true, 00:07:58.049 "write": true, 00:07:58.049 "unmap": true, 00:07:58.049 "flush": true, 00:07:58.049 "reset": true, 00:07:58.049 "nvme_admin": false, 00:07:58.049 "nvme_io": false, 00:07:58.049 "nvme_io_md": false, 00:07:58.049 "write_zeroes": true, 00:07:58.049 "zcopy": true, 00:07:58.049 "get_zone_info": false, 00:07:58.049 "zone_management": false, 00:07:58.049 "zone_append": false, 00:07:58.049 "compare": false, 00:07:58.049 "compare_and_write": false, 00:07:58.049 "abort": true, 00:07:58.049 "seek_hole": false, 00:07:58.049 "seek_data": false, 00:07:58.049 "copy": true, 00:07:58.049 "nvme_iov_md": false 00:07:58.049 }, 00:07:58.049 "memory_domains": [ 00:07:58.049 { 00:07:58.049 "dma_device_id": "system", 00:07:58.049 "dma_device_type": 1 00:07:58.049 }, 00:07:58.049 { 00:07:58.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.049 "dma_device_type": 2 00:07:58.049 } 00:07:58.049 ], 00:07:58.049 "driver_specific": {} 00:07:58.049 } 00:07:58.049 ] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.049 03:56:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.049 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.049 "name": "Existed_Raid", 00:07:58.049 "uuid": "799bf81c-a463-4bdc-9261-cd61f48e331a", 00:07:58.049 "strip_size_kb": 64, 00:07:58.049 "state": "configuring", 00:07:58.049 "raid_level": "concat", 00:07:58.049 "superblock": true, 00:07:58.050 "num_base_bdevs": 2, 00:07:58.050 "num_base_bdevs_discovered": 1, 00:07:58.050 "num_base_bdevs_operational": 2, 00:07:58.050 "base_bdevs_list": [ 00:07:58.050 { 00:07:58.050 "name": "BaseBdev1", 00:07:58.050 "uuid": "b1da8ea0-63ab-457c-8121-5f6452454c15", 00:07:58.050 "is_configured": true, 00:07:58.050 "data_offset": 2048, 00:07:58.050 "data_size": 63488 00:07:58.050 }, 00:07:58.050 { 00:07:58.050 "name": "BaseBdev2", 00:07:58.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.050 
"is_configured": false, 00:07:58.050 "data_offset": 0, 00:07:58.050 "data_size": 0 00:07:58.050 } 00:07:58.050 ] 00:07:58.050 }' 00:07:58.050 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.050 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.620 [2024-11-18 03:56:54.983546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.620 [2024-11-18 03:56:54.983709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.620 [2024-11-18 03:56:54.991561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.620 [2024-11-18 03:56:54.993741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.620 [2024-11-18 03:56:54.993822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.620 03:56:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.620 03:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.620 03:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.620 "name": "Existed_Raid", 00:07:58.620 "uuid": "d30387e6-0faf-474c-98d6-70bb53862f88", 00:07:58.620 "strip_size_kb": 64, 00:07:58.620 "state": "configuring", 00:07:58.620 "raid_level": "concat", 00:07:58.620 "superblock": true, 00:07:58.620 "num_base_bdevs": 2, 00:07:58.620 "num_base_bdevs_discovered": 1, 00:07:58.620 "num_base_bdevs_operational": 2, 00:07:58.620 "base_bdevs_list": [ 00:07:58.620 { 00:07:58.620 "name": "BaseBdev1", 00:07:58.620 "uuid": "b1da8ea0-63ab-457c-8121-5f6452454c15", 00:07:58.620 "is_configured": true, 00:07:58.620 "data_offset": 2048, 00:07:58.620 "data_size": 63488 00:07:58.620 }, 00:07:58.620 { 00:07:58.620 "name": "BaseBdev2", 00:07:58.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.620 "is_configured": false, 00:07:58.620 "data_offset": 0, 00:07:58.620 "data_size": 0 00:07:58.620 } 00:07:58.620 ] 00:07:58.620 }' 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.620 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 [2024-11-18 03:56:55.463677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.880 BaseBdev2 00:07:58.880 [2024-11-18 03:56:55.464076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.880 [2024-11-18 03:56:55.464097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.880 [2024-11-18 03:56:55.464398] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.880 [2024-11-18 03:56:55.464569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.880 [2024-11-18 03:56:55.464583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:58.880 [2024-11-18 03:56:55.464750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.880 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.881 
03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.881 [ 00:07:58.881 { 00:07:58.881 "name": "BaseBdev2", 00:07:58.881 "aliases": [ 00:07:58.881 "e650205b-042b-4534-8568-c87d0540254a" 00:07:58.881 ], 00:07:58.881 "product_name": "Malloc disk", 00:07:58.881 "block_size": 512, 00:07:58.881 "num_blocks": 65536, 00:07:58.881 "uuid": "e650205b-042b-4534-8568-c87d0540254a", 00:07:58.881 "assigned_rate_limits": { 00:07:58.881 "rw_ios_per_sec": 0, 00:07:58.881 "rw_mbytes_per_sec": 0, 00:07:58.881 "r_mbytes_per_sec": 0, 00:07:58.881 "w_mbytes_per_sec": 0 00:07:58.881 }, 00:07:58.881 "claimed": true, 00:07:58.881 "claim_type": "exclusive_write", 00:07:58.881 "zoned": false, 00:07:58.881 "supported_io_types": { 00:07:58.881 "read": true, 00:07:58.881 "write": true, 00:07:58.881 "unmap": true, 00:07:58.881 "flush": true, 00:07:58.881 "reset": true, 00:07:58.881 "nvme_admin": false, 00:07:58.881 "nvme_io": false, 00:07:58.881 "nvme_io_md": false, 00:07:58.881 "write_zeroes": true, 00:07:58.881 "zcopy": true, 00:07:58.881 "get_zone_info": false, 00:07:58.881 "zone_management": false, 00:07:58.881 "zone_append": false, 00:07:58.881 "compare": false, 00:07:58.881 "compare_and_write": false, 00:07:58.881 "abort": true, 00:07:58.881 "seek_hole": false, 00:07:58.881 "seek_data": false, 00:07:58.881 "copy": true, 00:07:58.881 "nvme_iov_md": false 00:07:58.881 }, 00:07:58.881 "memory_domains": [ 00:07:58.881 { 00:07:58.881 "dma_device_id": "system", 00:07:58.881 "dma_device_type": 1 00:07:58.881 }, 00:07:58.881 { 00:07:58.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.881 "dma_device_type": 2 00:07:58.881 } 00:07:58.881 ], 00:07:58.881 "driver_specific": {} 00:07:58.881 } 00:07:58.881 ] 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:58.881 03:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.881 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.141 03:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.141 "name": "Existed_Raid", 00:07:59.141 "uuid": "d30387e6-0faf-474c-98d6-70bb53862f88", 00:07:59.141 "strip_size_kb": 64, 00:07:59.141 "state": "online", 00:07:59.141 "raid_level": "concat", 00:07:59.141 "superblock": true, 00:07:59.141 "num_base_bdevs": 2, 00:07:59.141 "num_base_bdevs_discovered": 2, 00:07:59.141 "num_base_bdevs_operational": 2, 00:07:59.141 "base_bdevs_list": [ 00:07:59.141 { 00:07:59.141 "name": "BaseBdev1", 00:07:59.141 "uuid": "b1da8ea0-63ab-457c-8121-5f6452454c15", 00:07:59.141 "is_configured": true, 00:07:59.141 "data_offset": 2048, 00:07:59.141 "data_size": 63488 00:07:59.141 }, 00:07:59.141 { 00:07:59.141 "name": "BaseBdev2", 00:07:59.141 "uuid": "e650205b-042b-4534-8568-c87d0540254a", 00:07:59.141 "is_configured": true, 00:07:59.141 "data_offset": 2048, 00:07:59.141 "data_size": 63488 00:07:59.141 } 00:07:59.141 ] 00:07:59.141 }' 00:07:59.142 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.142 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:59.401 03:56:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.401 [2024-11-18 03:56:55.919276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.401 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:59.401 "name": "Existed_Raid", 00:07:59.401 "aliases": [ 00:07:59.401 "d30387e6-0faf-474c-98d6-70bb53862f88" 00:07:59.401 ], 00:07:59.401 "product_name": "Raid Volume", 00:07:59.401 "block_size": 512, 00:07:59.401 "num_blocks": 126976, 00:07:59.401 "uuid": "d30387e6-0faf-474c-98d6-70bb53862f88", 00:07:59.401 "assigned_rate_limits": { 00:07:59.401 "rw_ios_per_sec": 0, 00:07:59.401 "rw_mbytes_per_sec": 0, 00:07:59.401 "r_mbytes_per_sec": 0, 00:07:59.401 "w_mbytes_per_sec": 0 00:07:59.401 }, 00:07:59.401 "claimed": false, 00:07:59.401 "zoned": false, 00:07:59.401 "supported_io_types": { 00:07:59.401 "read": true, 00:07:59.401 "write": true, 00:07:59.402 "unmap": true, 00:07:59.402 "flush": true, 00:07:59.402 "reset": true, 00:07:59.402 "nvme_admin": false, 00:07:59.402 "nvme_io": false, 00:07:59.402 "nvme_io_md": false, 00:07:59.402 "write_zeroes": true, 00:07:59.402 "zcopy": false, 00:07:59.402 "get_zone_info": false, 00:07:59.402 "zone_management": false, 00:07:59.402 "zone_append": false, 00:07:59.402 "compare": false, 00:07:59.402 "compare_and_write": false, 00:07:59.402 "abort": false, 00:07:59.402 "seek_hole": false, 00:07:59.402 "seek_data": false, 00:07:59.402 "copy": false, 00:07:59.402 "nvme_iov_md": false 00:07:59.402 }, 00:07:59.402 "memory_domains": [ 00:07:59.402 { 00:07:59.402 "dma_device_id": 
"system", 00:07:59.402 "dma_device_type": 1 00:07:59.402 }, 00:07:59.402 { 00:07:59.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.402 "dma_device_type": 2 00:07:59.402 }, 00:07:59.402 { 00:07:59.402 "dma_device_id": "system", 00:07:59.402 "dma_device_type": 1 00:07:59.402 }, 00:07:59.402 { 00:07:59.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.402 "dma_device_type": 2 00:07:59.402 } 00:07:59.402 ], 00:07:59.402 "driver_specific": { 00:07:59.402 "raid": { 00:07:59.402 "uuid": "d30387e6-0faf-474c-98d6-70bb53862f88", 00:07:59.402 "strip_size_kb": 64, 00:07:59.402 "state": "online", 00:07:59.402 "raid_level": "concat", 00:07:59.402 "superblock": true, 00:07:59.402 "num_base_bdevs": 2, 00:07:59.402 "num_base_bdevs_discovered": 2, 00:07:59.402 "num_base_bdevs_operational": 2, 00:07:59.402 "base_bdevs_list": [ 00:07:59.402 { 00:07:59.402 "name": "BaseBdev1", 00:07:59.402 "uuid": "b1da8ea0-63ab-457c-8121-5f6452454c15", 00:07:59.402 "is_configured": true, 00:07:59.402 "data_offset": 2048, 00:07:59.402 "data_size": 63488 00:07:59.402 }, 00:07:59.402 { 00:07:59.402 "name": "BaseBdev2", 00:07:59.402 "uuid": "e650205b-042b-4534-8568-c87d0540254a", 00:07:59.402 "is_configured": true, 00:07:59.402 "data_offset": 2048, 00:07:59.402 "data_size": 63488 00:07:59.402 } 00:07:59.402 ] 00:07:59.402 } 00:07:59.402 } 00:07:59.402 }' 00:07:59.402 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.402 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:59.402 BaseBdev2' 00:07:59.402 03:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.402 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:59.402 03:56:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 [2024-11-18 03:56:56.146764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:59.663 [2024-11-18 03:56:56.146815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.663 [2024-11-18 03:56:56.146892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.663 03:56:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.663 "name": "Existed_Raid", 00:07:59.663 "uuid": "d30387e6-0faf-474c-98d6-70bb53862f88", 00:07:59.663 "strip_size_kb": 64, 00:07:59.663 "state": "offline", 00:07:59.663 "raid_level": "concat", 00:07:59.663 "superblock": true, 00:07:59.663 "num_base_bdevs": 2, 00:07:59.663 "num_base_bdevs_discovered": 1, 00:07:59.663 "num_base_bdevs_operational": 1, 00:07:59.663 "base_bdevs_list": [ 00:07:59.663 { 00:07:59.663 "name": null, 00:07:59.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.663 "is_configured": false, 00:07:59.663 "data_offset": 0, 00:07:59.663 "data_size": 63488 00:07:59.663 }, 00:07:59.663 { 00:07:59.663 "name": "BaseBdev2", 00:07:59.663 "uuid": "e650205b-042b-4534-8568-c87d0540254a", 00:07:59.663 "is_configured": true, 00:07:59.663 "data_offset": 2048, 00:07:59.663 "data_size": 63488 00:07:59.663 } 00:07:59.663 ] 00:07:59.663 }' 00:07:59.663 
03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.663 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.232 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.232 [2024-11-18 03:56:56.771651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:00.232 [2024-11-18 03:56:56.771814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61940 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61940 ']' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61940 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61940 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.492 killing process with pid 61940 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61940' 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61940 00:08:00.492 [2024-11-18 03:56:56.970560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.492 03:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61940 00:08:00.492 [2024-11-18 03:56:56.987320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.895 03:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:01.895 00:08:01.895 real 0m5.041s 00:08:01.895 user 0m7.139s 00:08:01.895 sys 0m0.875s 00:08:01.895 03:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.895 03:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.895 ************************************ 00:08:01.895 END TEST raid_state_function_test_sb 00:08:01.895 ************************************ 00:08:01.895 03:56:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:01.895 03:56:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:01.895 03:56:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.895 03:56:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.895 ************************************ 00:08:01.895 START TEST raid_superblock_test 00:08:01.895 ************************************ 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:01.895 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62189 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62189 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62189 ']' 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.896 03:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.896 [2024-11-18 03:56:58.325798] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:01.896 [2024-11-18 03:56:58.326439] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:08:01.896 [2024-11-18 03:56:58.499987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.155 [2024-11-18 03:56:58.639575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.415 [2024-11-18 03:56:58.881715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.415 [2024-11-18 03:56:58.881867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.687 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.688 03:56:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.688 malloc1 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.688 [2024-11-18 03:56:59.203436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.688 [2024-11-18 03:56:59.203517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.688 [2024-11-18 03:56:59.203544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:02.688 [2024-11-18 03:56:59.203553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.688 
[2024-11-18 03:56:59.205947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.688 [2024-11-18 03:56:59.205985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.688 pt1 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.688 malloc2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.688 03:56:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.688 [2024-11-18 03:56:59.265451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.688 [2024-11-18 03:56:59.265610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.688 [2024-11-18 03:56:59.265654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:02.688 [2024-11-18 03:56:59.265683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.688 [2024-11-18 03:56:59.268074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.688 [2024-11-18 03:56:59.268158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.688 pt2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.688 [2024-11-18 03:56:59.277485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.688 [2024-11-18 03:56:59.279538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.688 [2024-11-18 03:56:59.279736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:02.688 [2024-11-18 03:56:59.279781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.688 
[2024-11-18 03:56:59.280044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:02.688 [2024-11-18 03:56:59.280262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:02.688 [2024-11-18 03:56:59.280304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:02.688 [2024-11-18 03:56:59.280496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.688 03:56:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.688 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.949 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.949 "name": "raid_bdev1", 00:08:02.949 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:02.949 "strip_size_kb": 64, 00:08:02.949 "state": "online", 00:08:02.949 "raid_level": "concat", 00:08:02.949 "superblock": true, 00:08:02.949 "num_base_bdevs": 2, 00:08:02.949 "num_base_bdevs_discovered": 2, 00:08:02.949 "num_base_bdevs_operational": 2, 00:08:02.949 "base_bdevs_list": [ 00:08:02.949 { 00:08:02.949 "name": "pt1", 00:08:02.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.949 "is_configured": true, 00:08:02.949 "data_offset": 2048, 00:08:02.949 "data_size": 63488 00:08:02.949 }, 00:08:02.949 { 00:08:02.949 "name": "pt2", 00:08:02.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.949 "is_configured": true, 00:08:02.949 "data_offset": 2048, 00:08:02.949 "data_size": 63488 00:08:02.949 } 00:08:02.949 ] 00:08:02.949 }' 00:08:02.949 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.949 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.209 
03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.209 [2024-11-18 03:56:59.697083] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.209 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.209 "name": "raid_bdev1", 00:08:03.209 "aliases": [ 00:08:03.209 "e66dfb3e-b4a2-4475-8ee1-785e22c055bf" 00:08:03.209 ], 00:08:03.209 "product_name": "Raid Volume", 00:08:03.209 "block_size": 512, 00:08:03.209 "num_blocks": 126976, 00:08:03.209 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:03.209 "assigned_rate_limits": { 00:08:03.209 "rw_ios_per_sec": 0, 00:08:03.209 "rw_mbytes_per_sec": 0, 00:08:03.209 "r_mbytes_per_sec": 0, 00:08:03.209 "w_mbytes_per_sec": 0 00:08:03.209 }, 00:08:03.209 "claimed": false, 00:08:03.209 "zoned": false, 00:08:03.209 "supported_io_types": { 00:08:03.209 "read": true, 00:08:03.209 "write": true, 00:08:03.209 "unmap": true, 00:08:03.209 "flush": true, 00:08:03.209 "reset": true, 00:08:03.209 "nvme_admin": false, 00:08:03.209 "nvme_io": false, 00:08:03.209 "nvme_io_md": false, 00:08:03.209 "write_zeroes": true, 00:08:03.209 "zcopy": false, 00:08:03.209 "get_zone_info": false, 00:08:03.209 "zone_management": false, 00:08:03.209 "zone_append": false, 00:08:03.209 "compare": false, 00:08:03.209 "compare_and_write": false, 00:08:03.209 "abort": false, 00:08:03.209 "seek_hole": false, 00:08:03.209 
"seek_data": false, 00:08:03.209 "copy": false, 00:08:03.209 "nvme_iov_md": false 00:08:03.209 }, 00:08:03.209 "memory_domains": [ 00:08:03.209 { 00:08:03.209 "dma_device_id": "system", 00:08:03.209 "dma_device_type": 1 00:08:03.209 }, 00:08:03.209 { 00:08:03.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.209 "dma_device_type": 2 00:08:03.209 }, 00:08:03.209 { 00:08:03.209 "dma_device_id": "system", 00:08:03.209 "dma_device_type": 1 00:08:03.209 }, 00:08:03.209 { 00:08:03.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.209 "dma_device_type": 2 00:08:03.209 } 00:08:03.209 ], 00:08:03.209 "driver_specific": { 00:08:03.209 "raid": { 00:08:03.209 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:03.209 "strip_size_kb": 64, 00:08:03.209 "state": "online", 00:08:03.210 "raid_level": "concat", 00:08:03.210 "superblock": true, 00:08:03.210 "num_base_bdevs": 2, 00:08:03.210 "num_base_bdevs_discovered": 2, 00:08:03.210 "num_base_bdevs_operational": 2, 00:08:03.210 "base_bdevs_list": [ 00:08:03.210 { 00:08:03.210 "name": "pt1", 00:08:03.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.210 "is_configured": true, 00:08:03.210 "data_offset": 2048, 00:08:03.210 "data_size": 63488 00:08:03.210 }, 00:08:03.210 { 00:08:03.210 "name": "pt2", 00:08:03.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.210 "is_configured": true, 00:08:03.210 "data_offset": 2048, 00:08:03.210 "data_size": 63488 00:08:03.210 } 00:08:03.210 ] 00:08:03.210 } 00:08:03.210 } 00:08:03.210 }' 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:03.210 pt2' 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.210 03:56:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.210 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:03.470 [2024-11-18 03:56:59.920574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e66dfb3e-b4a2-4475-8ee1-785e22c055bf 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e66dfb3e-b4a2-4475-8ee1-785e22c055bf ']' 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 [2024-11-18 03:56:59.968266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.470 [2024-11-18 03:56:59.968380] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.470 [2024-11-18 03:56:59.968517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.470 [2024-11-18 03:56:59.968593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.470 [2024-11-18 03:56:59.968640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:03.470 03:56:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.470 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.470 [2024-11-18 03:57:00.096052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:03.470 [2024-11-18 03:57:00.098200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:03.470 [2024-11-18 03:57:00.098275] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:03.470 [2024-11-18 03:57:00.098331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:03.470 [2024-11-18 03:57:00.098346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.470 [2024-11-18 03:57:00.098356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:03.471 request: 00:08:03.471 { 00:08:03.471 "name": "raid_bdev1", 00:08:03.471 "raid_level": "concat", 00:08:03.471 "base_bdevs": [ 00:08:03.471 "malloc1", 00:08:03.471 "malloc2" 00:08:03.471 ], 00:08:03.471 "strip_size_kb": 64, 00:08:03.471 "superblock": false, 00:08:03.471 "method": "bdev_raid_create", 00:08:03.471 "req_id": 1 00:08:03.471 } 00:08:03.471 Got JSON-RPC error response 00:08:03.471 response: 00:08:03.471 { 00:08:03.471 "code": -17, 00:08:03.471 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:03.471 } 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:03.471 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.730 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.731 
03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.731 [2024-11-18 03:57:00.151991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:03.731 [2024-11-18 03:57:00.152151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.731 [2024-11-18 03:57:00.152193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:03.731 [2024-11-18 03:57:00.152225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.731 [2024-11-18 03:57:00.154738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.731 [2024-11-18 03:57:00.154809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:03.731 [2024-11-18 03:57:00.154944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:03.731 [2024-11-18 03:57:00.155037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:03.731 pt1 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.731 "name": "raid_bdev1", 00:08:03.731 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:03.731 "strip_size_kb": 64, 00:08:03.731 "state": "configuring", 00:08:03.731 "raid_level": "concat", 00:08:03.731 "superblock": true, 00:08:03.731 "num_base_bdevs": 2, 00:08:03.731 "num_base_bdevs_discovered": 1, 00:08:03.731 "num_base_bdevs_operational": 2, 00:08:03.731 "base_bdevs_list": [ 00:08:03.731 { 00:08:03.731 "name": "pt1", 00:08:03.731 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:03.731 "is_configured": true, 00:08:03.731 "data_offset": 2048, 00:08:03.731 "data_size": 63488 00:08:03.731 }, 00:08:03.731 { 00:08:03.731 "name": null, 00:08:03.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.731 "is_configured": false, 00:08:03.731 "data_offset": 2048, 00:08:03.731 "data_size": 63488 00:08:03.731 } 00:08:03.731 ] 00:08:03.731 }' 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.731 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.991 [2024-11-18 03:57:00.587378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:03.991 [2024-11-18 03:57:00.587570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.991 [2024-11-18 03:57:00.587601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:03.991 [2024-11-18 03:57:00.587614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.991 [2024-11-18 03:57:00.588214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.991 [2024-11-18 03:57:00.588239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:03.991 [2024-11-18 03:57:00.588343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:03.991 [2024-11-18 03:57:00.588374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:03.991 [2024-11-18 03:57:00.588505] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:03.991 [2024-11-18 03:57:00.588517] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:03.991 [2024-11-18 03:57:00.588796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:03.991 [2024-11-18 03:57:00.588982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:03.991 [2024-11-18 03:57:00.588999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:03.991 [2024-11-18 03:57:00.589146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.991 pt2 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.991 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.251 03:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.251 "name": "raid_bdev1", 00:08:04.251 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:04.251 "strip_size_kb": 64, 00:08:04.251 "state": "online", 00:08:04.251 "raid_level": "concat", 00:08:04.251 "superblock": true, 00:08:04.251 "num_base_bdevs": 2, 00:08:04.251 "num_base_bdevs_discovered": 2, 00:08:04.251 "num_base_bdevs_operational": 2, 00:08:04.251 "base_bdevs_list": [ 00:08:04.251 { 00:08:04.251 "name": "pt1", 00:08:04.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.251 "is_configured": true, 00:08:04.251 "data_offset": 2048, 00:08:04.251 "data_size": 63488 00:08:04.251 }, 00:08:04.251 { 00:08:04.251 "name": "pt2", 00:08:04.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.251 "is_configured": true, 00:08:04.251 "data_offset": 2048, 00:08:04.251 "data_size": 63488 00:08:04.251 } 00:08:04.251 ] 00:08:04.251 }' 00:08:04.251 03:57:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.251 03:57:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.511 [2024-11-18 03:57:01.038827] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:04.511 "name": "raid_bdev1", 00:08:04.511 "aliases": [ 00:08:04.511 "e66dfb3e-b4a2-4475-8ee1-785e22c055bf" 00:08:04.511 ], 00:08:04.511 "product_name": "Raid Volume", 00:08:04.511 "block_size": 512, 00:08:04.511 "num_blocks": 126976, 00:08:04.511 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:04.511 "assigned_rate_limits": { 00:08:04.511 "rw_ios_per_sec": 0, 00:08:04.511 "rw_mbytes_per_sec": 0, 00:08:04.511 
"r_mbytes_per_sec": 0, 00:08:04.511 "w_mbytes_per_sec": 0 00:08:04.511 }, 00:08:04.511 "claimed": false, 00:08:04.511 "zoned": false, 00:08:04.511 "supported_io_types": { 00:08:04.511 "read": true, 00:08:04.511 "write": true, 00:08:04.511 "unmap": true, 00:08:04.511 "flush": true, 00:08:04.511 "reset": true, 00:08:04.511 "nvme_admin": false, 00:08:04.511 "nvme_io": false, 00:08:04.511 "nvme_io_md": false, 00:08:04.511 "write_zeroes": true, 00:08:04.511 "zcopy": false, 00:08:04.511 "get_zone_info": false, 00:08:04.511 "zone_management": false, 00:08:04.511 "zone_append": false, 00:08:04.511 "compare": false, 00:08:04.511 "compare_and_write": false, 00:08:04.511 "abort": false, 00:08:04.511 "seek_hole": false, 00:08:04.511 "seek_data": false, 00:08:04.511 "copy": false, 00:08:04.511 "nvme_iov_md": false 00:08:04.511 }, 00:08:04.511 "memory_domains": [ 00:08:04.511 { 00:08:04.511 "dma_device_id": "system", 00:08:04.511 "dma_device_type": 1 00:08:04.511 }, 00:08:04.511 { 00:08:04.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.511 "dma_device_type": 2 00:08:04.511 }, 00:08:04.511 { 00:08:04.511 "dma_device_id": "system", 00:08:04.511 "dma_device_type": 1 00:08:04.511 }, 00:08:04.511 { 00:08:04.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.511 "dma_device_type": 2 00:08:04.511 } 00:08:04.511 ], 00:08:04.511 "driver_specific": { 00:08:04.511 "raid": { 00:08:04.511 "uuid": "e66dfb3e-b4a2-4475-8ee1-785e22c055bf", 00:08:04.511 "strip_size_kb": 64, 00:08:04.511 "state": "online", 00:08:04.511 "raid_level": "concat", 00:08:04.511 "superblock": true, 00:08:04.511 "num_base_bdevs": 2, 00:08:04.511 "num_base_bdevs_discovered": 2, 00:08:04.511 "num_base_bdevs_operational": 2, 00:08:04.511 "base_bdevs_list": [ 00:08:04.511 { 00:08:04.511 "name": "pt1", 00:08:04.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:04.511 "is_configured": true, 00:08:04.511 "data_offset": 2048, 00:08:04.511 "data_size": 63488 00:08:04.511 }, 00:08:04.511 { 00:08:04.511 "name": 
"pt2", 00:08:04.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:04.511 "is_configured": true, 00:08:04.511 "data_offset": 2048, 00:08:04.511 "data_size": 63488 00:08:04.511 } 00:08:04.511 ] 00:08:04.511 } 00:08:04.511 } 00:08:04.511 }' 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:04.511 pt2' 00:08:04.511 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.771 [2024-11-18 03:57:01.258336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e66dfb3e-b4a2-4475-8ee1-785e22c055bf '!=' e66dfb3e-b4a2-4475-8ee1-785e22c055bf ']' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62189 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62189 ']' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62189 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62189 00:08:04.771 killing process with pid 62189 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62189' 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62189 00:08:04.771 [2024-11-18 03:57:01.336893] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.771 [2024-11-18 03:57:01.337017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.771 [2024-11-18 03:57:01.337073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.771 [2024-11-18 03:57:01.337086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.771 03:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62189 00:08:05.031 [2024-11-18 03:57:01.563630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.413 03:57:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:06.413 00:08:06.413 real 0m4.546s 00:08:06.413 user 0m6.226s 00:08:06.413 sys 0m0.788s 00:08:06.413 03:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.413 ************************************ 00:08:06.413 END TEST 
raid_superblock_test 00:08:06.413 ************************************ 00:08:06.413 03:57:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.413 03:57:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:06.413 03:57:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.413 03:57:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.413 03:57:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.413 ************************************ 00:08:06.413 START TEST raid_read_error_test 00:08:06.413 ************************************ 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ss97K5s5l3 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62398 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62398 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62398 ']' 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.413 03:57:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.413 03:57:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.413 [2024-11-18 03:57:02.962542] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:06.413 [2024-11-18 03:57:02.962776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62398 ] 00:08:06.672 [2024-11-18 03:57:03.141030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.672 [2024-11-18 03:57:03.281420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.932 [2024-11-18 03:57:03.513594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.932 [2024-11-18 03:57:03.513757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.190 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.190 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.190 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.190 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.190 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.190 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:07.450 BaseBdev1_malloc 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.450 true 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.450 [2024-11-18 03:57:03.868198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.450 [2024-11-18 03:57:03.868345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.450 [2024-11-18 03:57:03.868371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.450 [2024-11-18 03:57:03.868384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.450 [2024-11-18 03:57:03.870858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.450 [2024-11-18 03:57:03.870906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.450 BaseBdev1 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.450 BaseBdev2_malloc 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.450 true 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.450 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.450 [2024-11-18 03:57:03.943289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.450 [2024-11-18 03:57:03.943360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.450 [2024-11-18 03:57:03.943378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.450 [2024-11-18 03:57:03.943390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.450 [2024-11-18 03:57:03.945754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.450 [2024-11-18 03:57:03.945891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.450 BaseBdev2 00:08:07.451 03:57:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.451 [2024-11-18 03:57:03.955337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.451 [2024-11-18 03:57:03.957391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.451 [2024-11-18 03:57:03.957635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.451 [2024-11-18 03:57:03.957653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.451 [2024-11-18 03:57:03.957880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:07.451 [2024-11-18 03:57:03.958058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.451 [2024-11-18 03:57:03.958070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.451 [2024-11-18 03:57:03.958206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.451 03:57:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.451 03:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.451 03:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.451 "name": "raid_bdev1", 00:08:07.451 "uuid": "168ae080-a29c-4956-8d31-4a3260f93c88", 00:08:07.451 "strip_size_kb": 64, 00:08:07.451 "state": "online", 00:08:07.451 "raid_level": "concat", 00:08:07.451 "superblock": true, 00:08:07.451 "num_base_bdevs": 2, 00:08:07.451 "num_base_bdevs_discovered": 2, 00:08:07.451 "num_base_bdevs_operational": 2, 00:08:07.451 "base_bdevs_list": [ 00:08:07.451 { 00:08:07.451 "name": "BaseBdev1", 00:08:07.451 "uuid": "29cef789-8918-502d-b80e-7ee514501c92", 00:08:07.451 "is_configured": true, 00:08:07.451 "data_offset": 2048, 00:08:07.451 "data_size": 63488 00:08:07.451 }, 
00:08:07.451 { 00:08:07.451 "name": "BaseBdev2", 00:08:07.451 "uuid": "569e0174-0205-542a-bb9d-e099969a2b98", 00:08:07.451 "is_configured": true, 00:08:07.451 "data_offset": 2048, 00:08:07.451 "data_size": 63488 00:08:07.451 } 00:08:07.451 ] 00:08:07.451 }' 00:08:07.451 03:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.451 03:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.030 03:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:08.030 03:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:08.030 [2024-11-18 03:57:04.443768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.975 03:57:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.975 "name": "raid_bdev1", 00:08:08.975 "uuid": "168ae080-a29c-4956-8d31-4a3260f93c88", 00:08:08.975 "strip_size_kb": 64, 00:08:08.975 "state": "online", 00:08:08.975 "raid_level": "concat", 00:08:08.975 "superblock": true, 00:08:08.975 "num_base_bdevs": 2, 00:08:08.975 "num_base_bdevs_discovered": 2, 00:08:08.975 "num_base_bdevs_operational": 2, 00:08:08.975 "base_bdevs_list": [ 00:08:08.975 { 00:08:08.975 "name": "BaseBdev1", 00:08:08.975 "uuid": "29cef789-8918-502d-b80e-7ee514501c92", 00:08:08.975 "is_configured": true, 00:08:08.975 "data_offset": 2048, 00:08:08.975 "data_size": 63488 00:08:08.975 }, 
00:08:08.975 { 00:08:08.975 "name": "BaseBdev2", 00:08:08.975 "uuid": "569e0174-0205-542a-bb9d-e099969a2b98", 00:08:08.975 "is_configured": true, 00:08:08.975 "data_offset": 2048, 00:08:08.975 "data_size": 63488 00:08:08.975 } 00:08:08.975 ] 00:08:08.975 }' 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.975 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.235 [2024-11-18 03:57:05.836094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.235 [2024-11-18 03:57:05.836149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.235 [2024-11-18 03:57:05.838668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.235 [2024-11-18 03:57:05.838719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.235 [2024-11-18 03:57:05.838753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.235 [2024-11-18 03:57:05.838769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.235 { 00:08:09.235 "results": [ 00:08:09.235 { 00:08:09.235 "job": "raid_bdev1", 00:08:09.235 "core_mask": "0x1", 00:08:09.235 "workload": "randrw", 00:08:09.235 "percentage": 50, 00:08:09.235 "status": "finished", 00:08:09.235 "queue_depth": 1, 00:08:09.235 "io_size": 131072, 00:08:09.235 "runtime": 1.392841, 00:08:09.235 "iops": 15024.68695278212, 00:08:09.235 "mibps": 1878.085869097765, 00:08:09.235 "io_failed": 1, 
00:08:09.235 "io_timeout": 0, 00:08:09.235 "avg_latency_us": 93.51763951764752, 00:08:09.235 "min_latency_us": 24.034934497816593, 00:08:09.235 "max_latency_us": 1337.907423580786 00:08:09.235 } 00:08:09.235 ], 00:08:09.235 "core_count": 1 00:08:09.235 } 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62398 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62398 ']' 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62398 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.235 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62398 00:08:09.496 killing process with pid 62398 00:08:09.496 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.496 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.496 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62398' 00:08:09.496 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62398 00:08:09.496 [2024-11-18 03:57:05.883879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.496 03:57:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62398 00:08:09.496 [2024-11-18 03:57:06.033736] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ss97K5s5l3 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:10.878 ************************************ 00:08:10.878 END TEST raid_read_error_test 00:08:10.878 ************************************ 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:10.878 00:08:10.878 real 0m4.441s 00:08:10.878 user 0m5.180s 00:08:10.878 sys 0m0.614s 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.878 03:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.878 03:57:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:10.878 03:57:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.878 03:57:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.878 03:57:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.878 ************************************ 00:08:10.878 START TEST raid_write_error_test 00:08:10.878 ************************************ 00:08:10.878 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:10.878 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:10.878 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:10.878 03:57:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cl76Oqs6fK 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62545 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62545 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62545 ']' 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.879 03:57:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.879 [2024-11-18 03:57:07.490075] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:10.879 [2024-11-18 03:57:07.490369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62545 ] 00:08:11.138 [2024-11-18 03:57:07.676138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.399 [2024-11-18 03:57:07.815848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.659 [2024-11-18 03:57:08.052355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.659 [2024-11-18 03:57:08.052520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 BaseBdev1_malloc 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 true 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 [2024-11-18 03:57:08.399888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.919 [2024-11-18 03:57:08.399964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.919 [2024-11-18 03:57:08.399986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.919 [2024-11-18 03:57:08.399998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.919 [2024-11-18 03:57:08.402357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.919 [2024-11-18 03:57:08.402399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.919 BaseBdev1 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 BaseBdev2_malloc 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.919 03:57:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 true 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 [2024-11-18 03:57:08.466391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.919 [2024-11-18 03:57:08.466531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.919 [2024-11-18 03:57:08.466552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.919 [2024-11-18 03:57:08.466563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.919 [2024-11-18 03:57:08.468977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.919 [2024-11-18 03:57:08.469016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.919 BaseBdev2 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 [2024-11-18 03:57:08.478431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:11.919 [2024-11-18 03:57:08.480512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.919 [2024-11-18 03:57:08.480718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.919 [2024-11-18 03:57:08.480733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.919 [2024-11-18 03:57:08.480969] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:11.919 [2024-11-18 03:57:08.481172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.919 [2024-11-18 03:57:08.481192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:11.919 [2024-11-18 03:57:08.481340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.919 03:57:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.919 "name": "raid_bdev1", 00:08:11.919 "uuid": "94f2d95b-42f2-4cab-a915-925f67f632a1", 00:08:11.919 "strip_size_kb": 64, 00:08:11.919 "state": "online", 00:08:11.919 "raid_level": "concat", 00:08:11.919 "superblock": true, 00:08:11.919 "num_base_bdevs": 2, 00:08:11.919 "num_base_bdevs_discovered": 2, 00:08:11.919 "num_base_bdevs_operational": 2, 00:08:11.919 "base_bdevs_list": [ 00:08:11.919 { 00:08:11.919 "name": "BaseBdev1", 00:08:11.919 "uuid": "fc65c216-50cd-5460-9153-971814138fde", 00:08:11.919 "is_configured": true, 00:08:11.919 "data_offset": 2048, 00:08:11.919 "data_size": 63488 00:08:11.919 }, 00:08:11.919 { 00:08:11.919 "name": "BaseBdev2", 00:08:11.919 "uuid": "a698edb9-1be9-508b-85c9-86043e4adeeb", 00:08:11.919 "is_configured": true, 00:08:11.919 "data_offset": 2048, 00:08:11.919 "data_size": 63488 00:08:11.919 } 00:08:11.919 ] 00:08:11.919 }' 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.919 03:57:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.490 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:12.490 03:57:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.490 [2024-11-18 03:57:09.014855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.431 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.432 "name": "raid_bdev1", 00:08:13.432 "uuid": "94f2d95b-42f2-4cab-a915-925f67f632a1", 00:08:13.432 "strip_size_kb": 64, 00:08:13.432 "state": "online", 00:08:13.432 "raid_level": "concat", 00:08:13.432 "superblock": true, 00:08:13.432 "num_base_bdevs": 2, 00:08:13.432 "num_base_bdevs_discovered": 2, 00:08:13.432 "num_base_bdevs_operational": 2, 00:08:13.432 "base_bdevs_list": [ 00:08:13.432 { 00:08:13.432 "name": "BaseBdev1", 00:08:13.432 "uuid": "fc65c216-50cd-5460-9153-971814138fde", 00:08:13.432 "is_configured": true, 00:08:13.432 "data_offset": 2048, 00:08:13.432 "data_size": 63488 00:08:13.432 }, 00:08:13.432 { 00:08:13.432 "name": "BaseBdev2", 00:08:13.432 "uuid": "a698edb9-1be9-508b-85c9-86043e4adeeb", 00:08:13.432 "is_configured": true, 00:08:13.432 "data_offset": 2048, 00:08:13.432 "data_size": 63488 00:08:13.432 } 00:08:13.432 ] 00:08:13.432 }' 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.432 03:57:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.002 [2024-11-18 03:57:10.399441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:14.002 [2024-11-18 03:57:10.399498] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.002 [2024-11-18 03:57:10.402037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.002 [2024-11-18 03:57:10.402164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.002 [2024-11-18 03:57:10.402207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.002 [2024-11-18 03:57:10.402224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:14.002 { 00:08:14.002 "results": [ 00:08:14.002 { 00:08:14.002 "job": "raid_bdev1", 00:08:14.002 "core_mask": "0x1", 00:08:14.002 "workload": "randrw", 00:08:14.002 "percentage": 50, 00:08:14.002 "status": "finished", 00:08:14.002 "queue_depth": 1, 00:08:14.002 "io_size": 131072, 00:08:14.002 "runtime": 1.385152, 00:08:14.002 "iops": 14713.90865406829, 00:08:14.002 "mibps": 1839.2385817585362, 00:08:14.002 "io_failed": 1, 00:08:14.002 "io_timeout": 0, 00:08:14.002 "avg_latency_us": 95.5698576404645, 00:08:14.002 "min_latency_us": 24.593886462882097, 00:08:14.002 "max_latency_us": 1395.1441048034935 00:08:14.002 } 00:08:14.002 ], 00:08:14.002 "core_count": 1 00:08:14.002 } 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62545 00:08:14.002 03:57:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62545 ']' 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62545 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62545 00:08:14.002 killing process with pid 62545 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62545' 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62545 00:08:14.002 [2024-11-18 03:57:10.452690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.002 03:57:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62545 00:08:14.002 [2024-11-18 03:57:10.599942] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cl76Oqs6fK 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.406 03:57:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:15.406 00:08:15.406 real 0m4.494s 00:08:15.406 user 0m5.285s 00:08:15.406 sys 0m0.661s 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.406 ************************************ 00:08:15.406 END TEST raid_write_error_test 00:08:15.406 ************************************ 00:08:15.406 03:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.406 03:57:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:15.406 03:57:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:15.406 03:57:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:15.406 03:57:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.406 03:57:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.406 ************************************ 00:08:15.406 START TEST raid_state_function_test 00:08:15.406 ************************************ 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.406 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62683 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62683' 00:08:15.407 Process raid pid: 62683 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62683 00:08:15.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62683 ']' 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.407 03:57:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.407 [2024-11-18 03:57:12.018503] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:15.407 [2024-11-18 03:57:12.019062] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.666 [2024-11-18 03:57:12.194014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.926 [2024-11-18 03:57:12.333904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.187 [2024-11-18 03:57:12.576206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.187 [2024-11-18 03:57:12.576371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.447 [2024-11-18 03:57:12.848855] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.447 [2024-11-18 03:57:12.848925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.447 [2024-11-18 03:57:12.848936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.447 [2024-11-18 03:57:12.848947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.447 03:57:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.447 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.448 "name": "Existed_Raid", 00:08:16.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.448 "strip_size_kb": 0, 00:08:16.448 "state": "configuring", 00:08:16.448 
"raid_level": "raid1", 00:08:16.448 "superblock": false, 00:08:16.448 "num_base_bdevs": 2, 00:08:16.448 "num_base_bdevs_discovered": 0, 00:08:16.448 "num_base_bdevs_operational": 2, 00:08:16.448 "base_bdevs_list": [ 00:08:16.448 { 00:08:16.448 "name": "BaseBdev1", 00:08:16.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.448 "is_configured": false, 00:08:16.448 "data_offset": 0, 00:08:16.448 "data_size": 0 00:08:16.448 }, 00:08:16.448 { 00:08:16.448 "name": "BaseBdev2", 00:08:16.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.448 "is_configured": false, 00:08:16.448 "data_offset": 0, 00:08:16.448 "data_size": 0 00:08:16.448 } 00:08:16.448 ] 00:08:16.448 }' 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.448 03:57:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.709 [2024-11-18 03:57:13.312104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.709 [2024-11-18 03:57:13.312247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:16.709 [2024-11-18 03:57:13.320037] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.709 [2024-11-18 03:57:13.320149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.709 [2024-11-18 03:57:13.320184] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.709 [2024-11-18 03:57:13.320219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.709 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.969 [2024-11-18 03:57:13.373695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.969 BaseBdev1 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.969 [ 00:08:16.969 { 00:08:16.969 "name": "BaseBdev1", 00:08:16.969 "aliases": [ 00:08:16.969 "7a14eaae-f53b-4b25-8076-44523c62b777" 00:08:16.969 ], 00:08:16.969 "product_name": "Malloc disk", 00:08:16.969 "block_size": 512, 00:08:16.969 "num_blocks": 65536, 00:08:16.969 "uuid": "7a14eaae-f53b-4b25-8076-44523c62b777", 00:08:16.969 "assigned_rate_limits": { 00:08:16.969 "rw_ios_per_sec": 0, 00:08:16.969 "rw_mbytes_per_sec": 0, 00:08:16.969 "r_mbytes_per_sec": 0, 00:08:16.969 "w_mbytes_per_sec": 0 00:08:16.969 }, 00:08:16.969 "claimed": true, 00:08:16.969 "claim_type": "exclusive_write", 00:08:16.969 "zoned": false, 00:08:16.969 "supported_io_types": { 00:08:16.969 "read": true, 00:08:16.969 "write": true, 00:08:16.969 "unmap": true, 00:08:16.969 "flush": true, 00:08:16.969 "reset": true, 00:08:16.969 "nvme_admin": false, 00:08:16.969 "nvme_io": false, 00:08:16.969 "nvme_io_md": false, 00:08:16.969 "write_zeroes": true, 00:08:16.969 "zcopy": true, 00:08:16.969 "get_zone_info": false, 00:08:16.969 "zone_management": false, 00:08:16.969 "zone_append": false, 00:08:16.969 "compare": false, 00:08:16.969 "compare_and_write": false, 00:08:16.969 "abort": true, 00:08:16.969 "seek_hole": false, 00:08:16.969 "seek_data": false, 00:08:16.969 "copy": true, 00:08:16.969 "nvme_iov_md": 
false 00:08:16.969 }, 00:08:16.969 "memory_domains": [ 00:08:16.969 { 00:08:16.969 "dma_device_id": "system", 00:08:16.969 "dma_device_type": 1 00:08:16.969 }, 00:08:16.969 { 00:08:16.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.969 "dma_device_type": 2 00:08:16.969 } 00:08:16.969 ], 00:08:16.969 "driver_specific": {} 00:08:16.969 } 00:08:16.969 ] 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.969 
03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.969 "name": "Existed_Raid", 00:08:16.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.969 "strip_size_kb": 0, 00:08:16.969 "state": "configuring", 00:08:16.969 "raid_level": "raid1", 00:08:16.969 "superblock": false, 00:08:16.969 "num_base_bdevs": 2, 00:08:16.969 "num_base_bdevs_discovered": 1, 00:08:16.969 "num_base_bdevs_operational": 2, 00:08:16.969 "base_bdevs_list": [ 00:08:16.969 { 00:08:16.969 "name": "BaseBdev1", 00:08:16.969 "uuid": "7a14eaae-f53b-4b25-8076-44523c62b777", 00:08:16.969 "is_configured": true, 00:08:16.969 "data_offset": 0, 00:08:16.969 "data_size": 65536 00:08:16.969 }, 00:08:16.969 { 00:08:16.969 "name": "BaseBdev2", 00:08:16.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.969 "is_configured": false, 00:08:16.969 "data_offset": 0, 00:08:16.969 "data_size": 0 00:08:16.969 } 00:08:16.969 ] 00:08:16.969 }' 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.969 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 [2024-11-18 03:57:13.832980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.230 [2024-11-18 03:57:13.833153] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.230 [2024-11-18 03:57:13.844976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.230 [2024-11-18 03:57:13.847078] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.230 [2024-11-18 03:57:13.847185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.230 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.490 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.490 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.490 "name": "Existed_Raid", 00:08:17.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.490 "strip_size_kb": 0, 00:08:17.490 "state": "configuring", 00:08:17.490 "raid_level": "raid1", 00:08:17.490 "superblock": false, 00:08:17.490 "num_base_bdevs": 2, 00:08:17.490 "num_base_bdevs_discovered": 1, 00:08:17.490 "num_base_bdevs_operational": 2, 00:08:17.490 "base_bdevs_list": [ 00:08:17.490 { 00:08:17.490 "name": "BaseBdev1", 00:08:17.490 "uuid": "7a14eaae-f53b-4b25-8076-44523c62b777", 00:08:17.490 "is_configured": true, 00:08:17.490 "data_offset": 0, 00:08:17.490 "data_size": 65536 00:08:17.490 }, 00:08:17.490 { 00:08:17.490 "name": "BaseBdev2", 00:08:17.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.490 "is_configured": false, 00:08:17.490 "data_offset": 0, 00:08:17.490 "data_size": 0 00:08:17.490 } 00:08:17.490 ] 
00:08:17.490 }' 00:08:17.490 03:57:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.490 03:57:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.750 [2024-11-18 03:57:14.335168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.750 [2024-11-18 03:57:14.335334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.750 [2024-11-18 03:57:14.335360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:17.750 [2024-11-18 03:57:14.335705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.750 [2024-11-18 03:57:14.335937] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.750 [2024-11-18 03:57:14.335988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.750 [2024-11-18 03:57:14.336321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.750 BaseBdev2 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.750 [ 00:08:17.750 { 00:08:17.750 "name": "BaseBdev2", 00:08:17.750 "aliases": [ 00:08:17.750 "9d657a21-1adb-451b-9c47-b9b714e184ea" 00:08:17.750 ], 00:08:17.750 "product_name": "Malloc disk", 00:08:17.750 "block_size": 512, 00:08:17.750 "num_blocks": 65536, 00:08:17.750 "uuid": "9d657a21-1adb-451b-9c47-b9b714e184ea", 00:08:17.750 "assigned_rate_limits": { 00:08:17.750 "rw_ios_per_sec": 0, 00:08:17.750 "rw_mbytes_per_sec": 0, 00:08:17.750 "r_mbytes_per_sec": 0, 00:08:17.750 "w_mbytes_per_sec": 0 00:08:17.750 }, 00:08:17.750 "claimed": true, 00:08:17.750 "claim_type": "exclusive_write", 00:08:17.750 "zoned": false, 00:08:17.750 "supported_io_types": { 00:08:17.750 "read": true, 00:08:17.750 "write": true, 00:08:17.750 "unmap": true, 00:08:17.750 "flush": true, 00:08:17.750 "reset": true, 00:08:17.750 "nvme_admin": false, 00:08:17.750 "nvme_io": false, 00:08:17.750 "nvme_io_md": false, 00:08:17.750 "write_zeroes": 
true, 00:08:17.750 "zcopy": true, 00:08:17.750 "get_zone_info": false, 00:08:17.750 "zone_management": false, 00:08:17.750 "zone_append": false, 00:08:17.750 "compare": false, 00:08:17.750 "compare_and_write": false, 00:08:17.750 "abort": true, 00:08:17.750 "seek_hole": false, 00:08:17.750 "seek_data": false, 00:08:17.750 "copy": true, 00:08:17.750 "nvme_iov_md": false 00:08:17.750 }, 00:08:17.750 "memory_domains": [ 00:08:17.750 { 00:08:17.750 "dma_device_id": "system", 00:08:17.750 "dma_device_type": 1 00:08:17.750 }, 00:08:17.750 { 00:08:17.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.750 "dma_device_type": 2 00:08:17.750 } 00:08:17.750 ], 00:08:17.750 "driver_specific": {} 00:08:17.750 } 00:08:17.750 ] 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.750 03:57:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.750 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.010 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.010 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.010 "name": "Existed_Raid", 00:08:18.010 "uuid": "d3b4bde8-f9de-4d16-a9e7-fe06f0353c53", 00:08:18.010 "strip_size_kb": 0, 00:08:18.010 "state": "online", 00:08:18.010 "raid_level": "raid1", 00:08:18.010 "superblock": false, 00:08:18.010 "num_base_bdevs": 2, 00:08:18.010 "num_base_bdevs_discovered": 2, 00:08:18.010 "num_base_bdevs_operational": 2, 00:08:18.010 "base_bdevs_list": [ 00:08:18.010 { 00:08:18.010 "name": "BaseBdev1", 00:08:18.010 "uuid": "7a14eaae-f53b-4b25-8076-44523c62b777", 00:08:18.010 "is_configured": true, 00:08:18.010 "data_offset": 0, 00:08:18.010 "data_size": 65536 00:08:18.010 }, 00:08:18.010 { 00:08:18.010 "name": "BaseBdev2", 00:08:18.010 "uuid": "9d657a21-1adb-451b-9c47-b9b714e184ea", 00:08:18.010 "is_configured": true, 00:08:18.010 "data_offset": 0, 00:08:18.010 "data_size": 65536 00:08:18.010 } 00:08:18.010 ] 00:08:18.010 }' 00:08:18.010 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.010 03:57:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.270 [2024-11-18 03:57:14.818726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.270 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.270 "name": "Existed_Raid", 00:08:18.270 "aliases": [ 00:08:18.270 "d3b4bde8-f9de-4d16-a9e7-fe06f0353c53" 00:08:18.270 ], 00:08:18.270 "product_name": "Raid Volume", 00:08:18.270 "block_size": 512, 00:08:18.270 "num_blocks": 65536, 00:08:18.270 "uuid": "d3b4bde8-f9de-4d16-a9e7-fe06f0353c53", 00:08:18.270 "assigned_rate_limits": { 00:08:18.270 "rw_ios_per_sec": 0, 00:08:18.270 "rw_mbytes_per_sec": 0, 00:08:18.270 "r_mbytes_per_sec": 0, 00:08:18.270 
"w_mbytes_per_sec": 0 00:08:18.270 }, 00:08:18.270 "claimed": false, 00:08:18.270 "zoned": false, 00:08:18.270 "supported_io_types": { 00:08:18.270 "read": true, 00:08:18.270 "write": true, 00:08:18.270 "unmap": false, 00:08:18.270 "flush": false, 00:08:18.270 "reset": true, 00:08:18.270 "nvme_admin": false, 00:08:18.270 "nvme_io": false, 00:08:18.270 "nvme_io_md": false, 00:08:18.270 "write_zeroes": true, 00:08:18.270 "zcopy": false, 00:08:18.270 "get_zone_info": false, 00:08:18.270 "zone_management": false, 00:08:18.270 "zone_append": false, 00:08:18.270 "compare": false, 00:08:18.270 "compare_and_write": false, 00:08:18.270 "abort": false, 00:08:18.270 "seek_hole": false, 00:08:18.270 "seek_data": false, 00:08:18.270 "copy": false, 00:08:18.270 "nvme_iov_md": false 00:08:18.270 }, 00:08:18.270 "memory_domains": [ 00:08:18.270 { 00:08:18.270 "dma_device_id": "system", 00:08:18.270 "dma_device_type": 1 00:08:18.270 }, 00:08:18.270 { 00:08:18.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.270 "dma_device_type": 2 00:08:18.270 }, 00:08:18.270 { 00:08:18.270 "dma_device_id": "system", 00:08:18.270 "dma_device_type": 1 00:08:18.270 }, 00:08:18.270 { 00:08:18.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.270 "dma_device_type": 2 00:08:18.270 } 00:08:18.270 ], 00:08:18.270 "driver_specific": { 00:08:18.270 "raid": { 00:08:18.270 "uuid": "d3b4bde8-f9de-4d16-a9e7-fe06f0353c53", 00:08:18.270 "strip_size_kb": 0, 00:08:18.270 "state": "online", 00:08:18.270 "raid_level": "raid1", 00:08:18.270 "superblock": false, 00:08:18.270 "num_base_bdevs": 2, 00:08:18.270 "num_base_bdevs_discovered": 2, 00:08:18.270 "num_base_bdevs_operational": 2, 00:08:18.270 "base_bdevs_list": [ 00:08:18.270 { 00:08:18.270 "name": "BaseBdev1", 00:08:18.270 "uuid": "7a14eaae-f53b-4b25-8076-44523c62b777", 00:08:18.270 "is_configured": true, 00:08:18.270 "data_offset": 0, 00:08:18.270 "data_size": 65536 00:08:18.270 }, 00:08:18.270 { 00:08:18.270 "name": "BaseBdev2", 00:08:18.271 "uuid": 
"9d657a21-1adb-451b-9c47-b9b714e184ea", 00:08:18.271 "is_configured": true, 00:08:18.271 "data_offset": 0, 00:08:18.271 "data_size": 65536 00:08:18.271 } 00:08:18.271 ] 00:08:18.271 } 00:08:18.271 } 00:08:18.271 }' 00:08:18.271 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.271 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.271 BaseBdev2' 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.531 03:57:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.531 03:57:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.531 [2024-11-18 03:57:15.030058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.531 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.790 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.790 "name": "Existed_Raid", 00:08:18.790 "uuid": "d3b4bde8-f9de-4d16-a9e7-fe06f0353c53", 00:08:18.790 "strip_size_kb": 0, 00:08:18.790 "state": "online", 00:08:18.790 "raid_level": "raid1", 00:08:18.790 "superblock": false, 00:08:18.790 "num_base_bdevs": 2, 00:08:18.790 "num_base_bdevs_discovered": 1, 00:08:18.790 "num_base_bdevs_operational": 1, 00:08:18.790 "base_bdevs_list": [ 00:08:18.790 { 
00:08:18.790 "name": null, 00:08:18.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.790 "is_configured": false, 00:08:18.790 "data_offset": 0, 00:08:18.790 "data_size": 65536 00:08:18.790 }, 00:08:18.790 { 00:08:18.790 "name": "BaseBdev2", 00:08:18.790 "uuid": "9d657a21-1adb-451b-9c47-b9b714e184ea", 00:08:18.790 "is_configured": true, 00:08:18.790 "data_offset": 0, 00:08:18.790 "data_size": 65536 00:08:18.790 } 00:08:18.790 ] 00:08:18.790 }' 00:08:18.790 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.790 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.050 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
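The cryptic `[[ 512 == \5\1\2\ \ \ ]]` comparisons earlier in the log come from bdev_raid.sh@189-@193: the test flattens `[.block_size, .md_size, .md_interleave, .dif_type]` into one string for the raid bdev and for each base bdev, then requires the strings to match (jq's `join` renders the null metadata fields as empty strings, hence the trailing spaces). A minimal standalone sketch of that check, using trimmed stand-in JSON in place of real `bdev_get_bdevs` output (the field values here are assumptions, not captured from this run):

```shell
#!/usr/bin/env bash
# Stand-in for `rpc.py bdev_get_bdevs` output; null md_size/md_interleave/dif_type
# mirror what the log shows for the malloc base bdevs and the raid1 bdev.
raid_json='{ "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null }'
base_json='{ "block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null }'

# bdev_raid.sh@189/@192-style filter: join the four geometry fields with spaces;
# jq treats nulls as empty strings, so the result is "512" plus three spaces.
geometry() {
    jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<< "$1"
}

cmp_raid_bdev=$(geometry "$raid_json")
cmp_base_bdev=$(geometry "$base_json")

# the @193 check: base bdev geometry must match the raid bdev geometry exactly
[[ $cmp_base_bdev == "$cmp_raid_bdev" ]] && echo "geometry matches"
```

Because the three metadata fields are null, both strings expand to `512` followed by three separator spaces, which is exactly what the escaped `\5\1\2\ \ \ ` pattern in the xtrace output encodes.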
00:08:19.050 [2024-11-18 03:57:15.634426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.050 [2024-11-18 03:57:15.634649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.310 [2024-11-18 03:57:15.740770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.310 [2024-11-18 03:57:15.740864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.310 [2024-11-18 03:57:15.740880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62683 00:08:19.310 03:57:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62683 ']' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62683 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62683 00:08:19.310 killing process with pid 62683 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62683' 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62683 00:08:19.310 [2024-11-18 03:57:15.834047] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.310 03:57:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62683 00:08:19.310 [2024-11-18 03:57:15.851784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.692 ************************************ 00:08:20.692 END TEST raid_state_function_test 00:08:20.692 ************************************ 00:08:20.692 00:08:20.692 real 0m5.108s 00:08:20.692 user 0m7.245s 00:08:20.692 sys 0m0.897s 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.692 03:57:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:20.692 03:57:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:20.692 03:57:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.692 03:57:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.692 ************************************ 00:08:20.692 START TEST raid_state_function_test_sb 00:08:20.692 ************************************ 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.692 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62936 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62936' 00:08:20.693 Process raid pid: 62936 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62936 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62936 ']' 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.693 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.693 03:57:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.693 [2024-11-18 03:57:17.195039] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:20.693 [2024-11-18 03:57:17.195151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.953 [2024-11-18 03:57:17.370026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.953 [2024-11-18 03:57:17.501467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.213 [2024-11-18 03:57:17.740325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.213 [2024-11-18 03:57:17.740372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.473 [2024-11-18 03:57:18.025896] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.473 [2024-11-18 03:57:18.026058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.473 [2024-11-18 03:57:18.026073] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.473 [2024-11-18 03:57:18.026083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.473 "name": "Existed_Raid", 00:08:21.473 "uuid": "27905d89-9cbe-478b-9b50-02db2366eca6", 00:08:21.473 "strip_size_kb": 0, 00:08:21.473 "state": "configuring", 00:08:21.473 "raid_level": "raid1", 00:08:21.473 "superblock": true, 00:08:21.473 "num_base_bdevs": 2, 00:08:21.473 "num_base_bdevs_discovered": 0, 00:08:21.473 "num_base_bdevs_operational": 2, 00:08:21.473 "base_bdevs_list": [ 00:08:21.473 { 00:08:21.473 "name": "BaseBdev1", 00:08:21.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.473 "is_configured": false, 00:08:21.473 "data_offset": 0, 00:08:21.473 "data_size": 0 00:08:21.473 }, 00:08:21.473 { 00:08:21.473 "name": "BaseBdev2", 00:08:21.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.473 "is_configured": false, 00:08:21.473 "data_offset": 0, 00:08:21.473 "data_size": 0 00:08:21.473 } 00:08:21.473 ] 00:08:21.473 }' 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.473 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.044 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.044 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.044 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.044 [2024-11-18 03:57:18.437158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:22.045 [2024-11-18 03:57:18.437293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.045 [2024-11-18 03:57:18.449069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.045 [2024-11-18 03:57:18.449148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.045 [2024-11-18 03:57:18.449173] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.045 [2024-11-18 03:57:18.449198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.045 [2024-11-18 03:57:18.502640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.045 BaseBdev1 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.045 [ 00:08:22.045 { 00:08:22.045 "name": "BaseBdev1", 00:08:22.045 "aliases": [ 00:08:22.045 "ae2df944-77fb-4813-8c4c-2399a6e91639" 00:08:22.045 ], 00:08:22.045 "product_name": "Malloc disk", 00:08:22.045 "block_size": 512, 00:08:22.045 "num_blocks": 65536, 00:08:22.045 "uuid": "ae2df944-77fb-4813-8c4c-2399a6e91639", 00:08:22.045 "assigned_rate_limits": { 00:08:22.045 "rw_ios_per_sec": 0, 00:08:22.045 "rw_mbytes_per_sec": 0, 00:08:22.045 "r_mbytes_per_sec": 0, 00:08:22.045 "w_mbytes_per_sec": 0 00:08:22.045 }, 00:08:22.045 "claimed": true, 
00:08:22.045 "claim_type": "exclusive_write", 00:08:22.045 "zoned": false, 00:08:22.045 "supported_io_types": { 00:08:22.045 "read": true, 00:08:22.045 "write": true, 00:08:22.045 "unmap": true, 00:08:22.045 "flush": true, 00:08:22.045 "reset": true, 00:08:22.045 "nvme_admin": false, 00:08:22.045 "nvme_io": false, 00:08:22.045 "nvme_io_md": false, 00:08:22.045 "write_zeroes": true, 00:08:22.045 "zcopy": true, 00:08:22.045 "get_zone_info": false, 00:08:22.045 "zone_management": false, 00:08:22.045 "zone_append": false, 00:08:22.045 "compare": false, 00:08:22.045 "compare_and_write": false, 00:08:22.045 "abort": true, 00:08:22.045 "seek_hole": false, 00:08:22.045 "seek_data": false, 00:08:22.045 "copy": true, 00:08:22.045 "nvme_iov_md": false 00:08:22.045 }, 00:08:22.045 "memory_domains": [ 00:08:22.045 { 00:08:22.045 "dma_device_id": "system", 00:08:22.045 "dma_device_type": 1 00:08:22.045 }, 00:08:22.045 { 00:08:22.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.045 "dma_device_type": 2 00:08:22.045 } 00:08:22.045 ], 00:08:22.045 "driver_specific": {} 00:08:22.045 } 00:08:22.045 ] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.045 "name": "Existed_Raid", 00:08:22.045 "uuid": "2de95520-b963-4ea5-8ebd-f6e596f829d1", 00:08:22.045 "strip_size_kb": 0, 00:08:22.045 "state": "configuring", 00:08:22.045 "raid_level": "raid1", 00:08:22.045 "superblock": true, 00:08:22.045 "num_base_bdevs": 2, 00:08:22.045 "num_base_bdevs_discovered": 1, 00:08:22.045 "num_base_bdevs_operational": 2, 00:08:22.045 "base_bdevs_list": [ 00:08:22.045 { 00:08:22.045 "name": "BaseBdev1", 00:08:22.045 "uuid": "ae2df944-77fb-4813-8c4c-2399a6e91639", 00:08:22.045 "is_configured": true, 00:08:22.045 "data_offset": 2048, 00:08:22.045 "data_size": 63488 00:08:22.045 }, 00:08:22.045 { 00:08:22.045 "name": "BaseBdev2", 00:08:22.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.045 "is_configured": false, 00:08:22.045 
"data_offset": 0, 00:08:22.045 "data_size": 0 00:08:22.045 } 00:08:22.045 ] 00:08:22.045 }' 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.045 03:57:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.621 [2024-11-18 03:57:19.021816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.621 [2024-11-18 03:57:19.021910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.621 [2024-11-18 03:57:19.033857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.621 [2024-11-18 03:57:19.036023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.621 [2024-11-18 03:57:19.036147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.621 "name": "Existed_Raid", 00:08:22.621 "uuid": "70ec3024-4d63-44da-8546-0c765f2768ef", 00:08:22.621 "strip_size_kb": 0, 00:08:22.621 "state": "configuring", 00:08:22.621 "raid_level": "raid1", 00:08:22.621 "superblock": true, 00:08:22.621 "num_base_bdevs": 2, 00:08:22.621 "num_base_bdevs_discovered": 1, 00:08:22.621 "num_base_bdevs_operational": 2, 00:08:22.621 "base_bdevs_list": [ 00:08:22.621 { 00:08:22.621 "name": "BaseBdev1", 00:08:22.621 "uuid": "ae2df944-77fb-4813-8c4c-2399a6e91639", 00:08:22.621 "is_configured": true, 00:08:22.621 "data_offset": 2048, 00:08:22.621 "data_size": 63488 00:08:22.621 }, 00:08:22.621 { 00:08:22.621 "name": "BaseBdev2", 00:08:22.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.621 "is_configured": false, 00:08:22.621 "data_offset": 0, 00:08:22.621 "data_size": 0 00:08:22.621 } 00:08:22.621 ] 00:08:22.621 }' 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.621 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.919 [2024-11-18 03:57:19.518803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.919 [2024-11-18 03:57:19.519228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.919 [2024-11-18 03:57:19.519309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:22.919 [2024-11-18 03:57:19.519627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.919 
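The `verify_raid_bdev_state` helper seen throughout this log (bdev_raid.sh@103-@115) isolates the entry for the bdev under test from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compares fields such as `state` and `num_base_bdevs_discovered` against expected values. A hedged sketch of that extraction over a trimmed stand-in for the RPC output (the JSON below is illustrative, not captured from this run):

```shell
#!/usr/bin/env bash
# Trimmed stand-in for `rpc.py bdev_raid_get_bdevs all` output.
all_bdevs='[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
  }
]'

# bdev_raid.sh@113: isolate the entry for the raid bdev under test
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$all_bdevs")

# field checks in the style of the comparisons that follow @113
state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")

[[ $state == online ]] || exit 1
(( discovered == 1 )) || exit 1
echo "Existed_Raid: $state, $discovered base bdev(s) discovered"
```

This mirrors the raid1 degraded-state assertion in the log: after `bdev_malloc_delete BaseBdev1`, the array stays `online` with one of two base bdevs discovered, because raid1 has redundancy (`has_redundancy raid1` returns 0).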
[2024-11-18 03:57:19.519839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.919 [2024-11-18 03:57:19.519886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:22.919 BaseBdev2 00:08:22.919 [2024-11-18 03:57:19.520085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.919 03:57:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.919 [ 00:08:22.919 { 00:08:22.919 "name": "BaseBdev2", 00:08:22.919 "aliases": [ 00:08:22.919 "074f716a-d343-4821-899a-b1e8310911f7" 00:08:22.919 ], 00:08:22.919 "product_name": "Malloc disk", 00:08:22.919 "block_size": 512, 00:08:22.919 "num_blocks": 65536, 00:08:22.919 "uuid": "074f716a-d343-4821-899a-b1e8310911f7", 00:08:22.919 "assigned_rate_limits": { 00:08:22.919 "rw_ios_per_sec": 0, 00:08:22.919 "rw_mbytes_per_sec": 0, 00:08:22.919 "r_mbytes_per_sec": 0, 00:08:22.919 "w_mbytes_per_sec": 0 00:08:22.919 }, 00:08:22.919 "claimed": true, 00:08:22.919 "claim_type": "exclusive_write", 00:08:22.919 "zoned": false, 00:08:22.919 "supported_io_types": { 00:08:22.919 "read": true, 00:08:22.919 "write": true, 00:08:22.919 "unmap": true, 00:08:22.919 "flush": true, 00:08:22.919 "reset": true, 00:08:22.919 "nvme_admin": false, 00:08:22.919 "nvme_io": false, 00:08:22.919 "nvme_io_md": false, 00:08:22.919 "write_zeroes": true, 00:08:22.919 "zcopy": true, 00:08:22.919 "get_zone_info": false, 00:08:22.919 "zone_management": false, 00:08:22.919 "zone_append": false, 00:08:22.919 "compare": false, 00:08:22.919 "compare_and_write": false, 00:08:22.919 "abort": true, 00:08:22.919 "seek_hole": false, 00:08:22.919 "seek_data": false, 00:08:22.919 "copy": true, 00:08:22.919 "nvme_iov_md": false 00:08:22.919 }, 00:08:22.919 "memory_domains": [ 00:08:22.919 { 00:08:22.919 "dma_device_id": "system", 00:08:22.919 "dma_device_type": 1 00:08:22.919 }, 00:08:22.919 { 00:08:22.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.179 "dma_device_type": 2 00:08:23.179 } 00:08:23.179 ], 00:08:23.179 "driver_specific": {} 00:08:23.179 } 00:08:23.179 ] 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.179 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.180 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.180 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.180 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:23.180 "name": "Existed_Raid", 00:08:23.180 "uuid": "70ec3024-4d63-44da-8546-0c765f2768ef", 00:08:23.180 "strip_size_kb": 0, 00:08:23.180 "state": "online", 00:08:23.180 "raid_level": "raid1", 00:08:23.180 "superblock": true, 00:08:23.180 "num_base_bdevs": 2, 00:08:23.180 "num_base_bdevs_discovered": 2, 00:08:23.180 "num_base_bdevs_operational": 2, 00:08:23.180 "base_bdevs_list": [ 00:08:23.180 { 00:08:23.180 "name": "BaseBdev1", 00:08:23.180 "uuid": "ae2df944-77fb-4813-8c4c-2399a6e91639", 00:08:23.180 "is_configured": true, 00:08:23.180 "data_offset": 2048, 00:08:23.180 "data_size": 63488 00:08:23.180 }, 00:08:23.180 { 00:08:23.180 "name": "BaseBdev2", 00:08:23.180 "uuid": "074f716a-d343-4821-899a-b1e8310911f7", 00:08:23.180 "is_configured": true, 00:08:23.180 "data_offset": 2048, 00:08:23.180 "data_size": 63488 00:08:23.180 } 00:08:23.180 ] 00:08:23.180 }' 00:08:23.180 03:57:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.180 03:57:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- 
# rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.440 [2024-11-18 03:57:20.026283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.440 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.440 "name": "Existed_Raid", 00:08:23.440 "aliases": [ 00:08:23.440 "70ec3024-4d63-44da-8546-0c765f2768ef" 00:08:23.440 ], 00:08:23.440 "product_name": "Raid Volume", 00:08:23.440 "block_size": 512, 00:08:23.440 "num_blocks": 63488, 00:08:23.440 "uuid": "70ec3024-4d63-44da-8546-0c765f2768ef", 00:08:23.440 "assigned_rate_limits": { 00:08:23.440 "rw_ios_per_sec": 0, 00:08:23.440 "rw_mbytes_per_sec": 0, 00:08:23.440 "r_mbytes_per_sec": 0, 00:08:23.441 "w_mbytes_per_sec": 0 00:08:23.441 }, 00:08:23.441 "claimed": false, 00:08:23.441 "zoned": false, 00:08:23.441 "supported_io_types": { 00:08:23.441 "read": true, 00:08:23.441 "write": true, 00:08:23.441 "unmap": false, 00:08:23.441 "flush": false, 00:08:23.441 "reset": true, 00:08:23.441 "nvme_admin": false, 00:08:23.441 "nvme_io": false, 00:08:23.441 "nvme_io_md": false, 00:08:23.441 "write_zeroes": true, 00:08:23.441 "zcopy": false, 00:08:23.441 "get_zone_info": false, 00:08:23.441 "zone_management": false, 00:08:23.441 "zone_append": false, 00:08:23.441 "compare": false, 00:08:23.441 "compare_and_write": false, 00:08:23.441 "abort": false, 00:08:23.441 "seek_hole": false, 00:08:23.441 "seek_data": false, 00:08:23.441 "copy": false, 00:08:23.441 "nvme_iov_md": false 00:08:23.441 }, 00:08:23.441 "memory_domains": [ 00:08:23.441 { 00:08:23.441 "dma_device_id": "system", 00:08:23.441 "dma_device_type": 1 00:08:23.441 }, 
00:08:23.441 { 00:08:23.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.441 "dma_device_type": 2 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "dma_device_id": "system", 00:08:23.441 "dma_device_type": 1 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.441 "dma_device_type": 2 00:08:23.441 } 00:08:23.441 ], 00:08:23.441 "driver_specific": { 00:08:23.441 "raid": { 00:08:23.441 "uuid": "70ec3024-4d63-44da-8546-0c765f2768ef", 00:08:23.441 "strip_size_kb": 0, 00:08:23.441 "state": "online", 00:08:23.441 "raid_level": "raid1", 00:08:23.441 "superblock": true, 00:08:23.441 "num_base_bdevs": 2, 00:08:23.441 "num_base_bdevs_discovered": 2, 00:08:23.441 "num_base_bdevs_operational": 2, 00:08:23.441 "base_bdevs_list": [ 00:08:23.441 { 00:08:23.441 "name": "BaseBdev1", 00:08:23.441 "uuid": "ae2df944-77fb-4813-8c4c-2399a6e91639", 00:08:23.441 "is_configured": true, 00:08:23.441 "data_offset": 2048, 00:08:23.441 "data_size": 63488 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "name": "BaseBdev2", 00:08:23.441 "uuid": "074f716a-d343-4821-899a-b1e8310911f7", 00:08:23.441 "is_configured": true, 00:08:23.441 "data_offset": 2048, 00:08:23.441 "data_size": 63488 00:08:23.441 } 00:08:23.441 ] 00:08:23.441 } 00:08:23.441 } 00:08:23.441 }' 00:08:23.441 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.701 BaseBdev2' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.701 [2024-11-18 03:57:20.229654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:23.701 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.702 
03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.702 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.961 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.961 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.961 "name": "Existed_Raid", 00:08:23.961 "uuid": "70ec3024-4d63-44da-8546-0c765f2768ef", 00:08:23.961 "strip_size_kb": 0, 00:08:23.961 "state": "online", 00:08:23.961 "raid_level": "raid1", 00:08:23.961 "superblock": true, 00:08:23.961 "num_base_bdevs": 2, 00:08:23.961 "num_base_bdevs_discovered": 1, 00:08:23.961 "num_base_bdevs_operational": 1, 00:08:23.961 "base_bdevs_list": [ 00:08:23.961 { 00:08:23.961 "name": null, 00:08:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.961 "is_configured": false, 00:08:23.961 "data_offset": 0, 00:08:23.961 "data_size": 63488 00:08:23.961 }, 00:08:23.961 { 00:08:23.961 "name": "BaseBdev2", 00:08:23.961 "uuid": "074f716a-d343-4821-899a-b1e8310911f7", 00:08:23.961 "is_configured": true, 00:08:23.961 "data_offset": 2048, 00:08:23.961 "data_size": 63488 00:08:23.961 } 00:08:23.961 ] 00:08:23.961 }' 00:08:23.961 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.961 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.221 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.222 03:57:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.222 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.222 [2024-11-18 03:57:20.852483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.222 [2024-11-18 03:57:20.852723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.482 [2024-11-18 03:57:20.955591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.482 [2024-11-18 03:57:20.955759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.482 [2024-11-18 03:57:20.955804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.482 03:57:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62936 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62936 ']' 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62936 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62936 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.482 03:57:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.482 killing process with pid 62936 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62936' 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62936 00:08:24.482 [2024-11-18 03:57:21.034921] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.482 03:57:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62936 00:08:24.482 [2024-11-18 03:57:21.052307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.865 03:57:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.865 00:08:25.865 real 0m5.129s 00:08:25.865 user 0m7.314s 00:08:25.865 sys 0m0.861s 00:08:25.865 03:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.865 ************************************ 00:08:25.865 END TEST raid_state_function_test_sb 00:08:25.865 ************************************ 00:08:25.865 03:57:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.865 03:57:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:25.865 03:57:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:25.865 03:57:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.865 03:57:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.865 ************************************ 00:08:25.865 START TEST raid_superblock_test 00:08:25.865 ************************************ 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:25.865 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63188 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63188 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63188 ']' 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.866 03:57:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.866 [2024-11-18 03:57:22.390963] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:25.866 [2024-11-18 03:57:22.391137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63188 ] 00:08:26.126 [2024-11-18 03:57:22.543037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.126 [2024-11-18 03:57:22.674523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.386 [2024-11-18 03:57:22.902601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.386 [2024-11-18 03:57:22.902785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.645 03:57:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.645 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:26.646 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.646 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.646 malloc1 00:08:26.646 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.646 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:26.646 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.646 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.905 [2024-11-18 03:57:23.285926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.905 [2024-11-18 03:57:23.286007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.906 [2024-11-18 03:57:23.286032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.906 [2024-11-18 03:57:23.286042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.906 
[2024-11-18 03:57:23.288529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.906 [2024-11-18 03:57:23.288568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.906 pt1 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 malloc2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.906 03:57:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 [2024-11-18 03:57:23.346937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.906 [2024-11-18 03:57:23.347085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.906 [2024-11-18 03:57:23.347123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:26.906 [2024-11-18 03:57:23.347151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.906 [2024-11-18 03:57:23.349505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.906 [2024-11-18 03:57:23.349574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.906 pt2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 [2024-11-18 03:57:23.358997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:26.906 [2024-11-18 03:57:23.361199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.906 [2024-11-18 03:57:23.361410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.906 [2024-11-18 03:57:23.361460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.906 [2024-11-18 
03:57:23.361718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.906 [2024-11-18 03:57:23.361934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.906 [2024-11-18 03:57:23.361984] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:26.906 [2024-11-18 03:57:23.362167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.906 03:57:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.906 "name": "raid_bdev1", 00:08:26.906 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:26.906 "strip_size_kb": 0, 00:08:26.906 "state": "online", 00:08:26.906 "raid_level": "raid1", 00:08:26.906 "superblock": true, 00:08:26.906 "num_base_bdevs": 2, 00:08:26.906 "num_base_bdevs_discovered": 2, 00:08:26.906 "num_base_bdevs_operational": 2, 00:08:26.906 "base_bdevs_list": [ 00:08:26.906 { 00:08:26.906 "name": "pt1", 00:08:26.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.906 "is_configured": true, 00:08:26.906 "data_offset": 2048, 00:08:26.906 "data_size": 63488 00:08:26.906 }, 00:08:26.906 { 00:08:26.906 "name": "pt2", 00:08:26.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.906 "is_configured": true, 00:08:26.906 "data_offset": 2048, 00:08:26.906 "data_size": 63488 00:08:26.906 } 00:08:26.906 ] 00:08:26.906 }' 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.906 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.166 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.166 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.166 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.166 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.166 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.167 
03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.167 [2024-11-18 03:57:23.766640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.167 "name": "raid_bdev1", 00:08:27.167 "aliases": [ 00:08:27.167 "69cca686-69a9-453a-93a6-35b6bbae7d04" 00:08:27.167 ], 00:08:27.167 "product_name": "Raid Volume", 00:08:27.167 "block_size": 512, 00:08:27.167 "num_blocks": 63488, 00:08:27.167 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:27.167 "assigned_rate_limits": { 00:08:27.167 "rw_ios_per_sec": 0, 00:08:27.167 "rw_mbytes_per_sec": 0, 00:08:27.167 "r_mbytes_per_sec": 0, 00:08:27.167 "w_mbytes_per_sec": 0 00:08:27.167 }, 00:08:27.167 "claimed": false, 00:08:27.167 "zoned": false, 00:08:27.167 "supported_io_types": { 00:08:27.167 "read": true, 00:08:27.167 "write": true, 00:08:27.167 "unmap": false, 00:08:27.167 "flush": false, 00:08:27.167 "reset": true, 00:08:27.167 "nvme_admin": false, 00:08:27.167 "nvme_io": false, 00:08:27.167 "nvme_io_md": false, 00:08:27.167 "write_zeroes": true, 00:08:27.167 "zcopy": false, 00:08:27.167 "get_zone_info": false, 00:08:27.167 "zone_management": false, 00:08:27.167 "zone_append": false, 00:08:27.167 "compare": false, 00:08:27.167 "compare_and_write": false, 00:08:27.167 "abort": false, 00:08:27.167 "seek_hole": false, 
00:08:27.167 "seek_data": false, 00:08:27.167 "copy": false, 00:08:27.167 "nvme_iov_md": false 00:08:27.167 }, 00:08:27.167 "memory_domains": [ 00:08:27.167 { 00:08:27.167 "dma_device_id": "system", 00:08:27.167 "dma_device_type": 1 00:08:27.167 }, 00:08:27.167 { 00:08:27.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.167 "dma_device_type": 2 00:08:27.167 }, 00:08:27.167 { 00:08:27.167 "dma_device_id": "system", 00:08:27.167 "dma_device_type": 1 00:08:27.167 }, 00:08:27.167 { 00:08:27.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.167 "dma_device_type": 2 00:08:27.167 } 00:08:27.167 ], 00:08:27.167 "driver_specific": { 00:08:27.167 "raid": { 00:08:27.167 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:27.167 "strip_size_kb": 0, 00:08:27.167 "state": "online", 00:08:27.167 "raid_level": "raid1", 00:08:27.167 "superblock": true, 00:08:27.167 "num_base_bdevs": 2, 00:08:27.167 "num_base_bdevs_discovered": 2, 00:08:27.167 "num_base_bdevs_operational": 2, 00:08:27.167 "base_bdevs_list": [ 00:08:27.167 { 00:08:27.167 "name": "pt1", 00:08:27.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.167 "is_configured": true, 00:08:27.167 "data_offset": 2048, 00:08:27.167 "data_size": 63488 00:08:27.167 }, 00:08:27.167 { 00:08:27.167 "name": "pt2", 00:08:27.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.167 "is_configured": true, 00:08:27.167 "data_offset": 2048, 00:08:27.167 "data_size": 63488 00:08:27.167 } 00:08:27.167 ] 00:08:27.167 } 00:08:27.167 } 00:08:27.167 }' 00:08:27.167 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:27.427 pt2' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.427 03:57:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 [2024-11-18 03:57:23.970361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.427 03:57:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=69cca686-69a9-453a-93a6-35b6bbae7d04 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 69cca686-69a9-453a-93a6-35b6bbae7d04 ']' 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 [2024-11-18 03:57:24.017864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.427 [2024-11-18 03:57:24.017900] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.427 [2024-11-18 03:57:24.018005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.427 [2024-11-18 03:57:24.018071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.427 [2024-11-18 03:57:24.018083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:27.427 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.688 [2024-11-18 03:57:24.153618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:27.688 [2024-11-18 03:57:24.155772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:27.688 [2024-11-18 03:57:24.155937] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:08:27.688 [2024-11-18 03:57:24.156024] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:27.688 [2024-11-18 03:57:24.156086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.688 [2024-11-18 03:57:24.156113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:27.688 request: 00:08:27.688 { 00:08:27.688 "name": "raid_bdev1", 00:08:27.688 "raid_level": "raid1", 00:08:27.688 "base_bdevs": [ 00:08:27.688 "malloc1", 00:08:27.688 "malloc2" 00:08:27.688 ], 00:08:27.688 "superblock": false, 00:08:27.688 "method": "bdev_raid_create", 00:08:27.688 "req_id": 1 00:08:27.688 } 00:08:27.688 Got JSON-RPC error response 00:08:27.688 response: 00:08:27.688 { 00:08:27.688 "code": -17, 00:08:27.688 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:27.688 } 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.688 [2024-11-18 03:57:24.221576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.688 [2024-11-18 03:57:24.221673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.688 [2024-11-18 03:57:24.221694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:27.688 [2024-11-18 03:57:24.221721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.688 [2024-11-18 03:57:24.224301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.688 [2024-11-18 03:57:24.224342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.688 [2024-11-18 03:57:24.224444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:27.688 [2024-11-18 03:57:24.224511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.688 pt1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.688 03:57:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.688 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.688 "name": "raid_bdev1", 00:08:27.688 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:27.688 "strip_size_kb": 0, 00:08:27.688 "state": "configuring", 00:08:27.688 "raid_level": "raid1", 00:08:27.688 "superblock": true, 00:08:27.688 "num_base_bdevs": 2, 00:08:27.688 "num_base_bdevs_discovered": 1, 00:08:27.689 "num_base_bdevs_operational": 2, 00:08:27.689 "base_bdevs_list": [ 00:08:27.689 { 00:08:27.689 "name": "pt1", 00:08:27.689 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.689 
"is_configured": true, 00:08:27.689 "data_offset": 2048, 00:08:27.689 "data_size": 63488 00:08:27.689 }, 00:08:27.689 { 00:08:27.689 "name": null, 00:08:27.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.689 "is_configured": false, 00:08:27.689 "data_offset": 2048, 00:08:27.689 "data_size": 63488 00:08:27.689 } 00:08:27.689 ] 00:08:27.689 }' 00:08:27.689 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.689 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.259 [2024-11-18 03:57:24.656817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:28.259 [2024-11-18 03:57:24.657000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.259 [2024-11-18 03:57:24.657043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:28.259 [2024-11-18 03:57:24.657083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.259 [2024-11-18 03:57:24.657656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.259 [2024-11-18 03:57:24.657730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:28.259 [2024-11-18 03:57:24.657861] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:28.259 [2024-11-18 03:57:24.657919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:28.259 [2024-11-18 03:57:24.658074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.259 [2024-11-18 03:57:24.658118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:28.259 [2024-11-18 03:57:24.658398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:28.259 [2024-11-18 03:57:24.658616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.259 [2024-11-18 03:57:24.658658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:28.259 [2024-11-18 03:57:24.658865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.259 pt2 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.259 
03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.259 "name": "raid_bdev1", 00:08:28.259 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:28.259 "strip_size_kb": 0, 00:08:28.259 "state": "online", 00:08:28.259 "raid_level": "raid1", 00:08:28.259 "superblock": true, 00:08:28.259 "num_base_bdevs": 2, 00:08:28.259 "num_base_bdevs_discovered": 2, 00:08:28.259 "num_base_bdevs_operational": 2, 00:08:28.259 "base_bdevs_list": [ 00:08:28.259 { 00:08:28.259 "name": "pt1", 00:08:28.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.259 "is_configured": true, 00:08:28.259 "data_offset": 2048, 00:08:28.259 "data_size": 63488 00:08:28.259 }, 00:08:28.259 { 00:08:28.259 "name": "pt2", 00:08:28.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.259 "is_configured": true, 00:08:28.259 "data_offset": 2048, 00:08:28.259 "data_size": 63488 00:08:28.259 } 00:08:28.259 ] 00:08:28.259 }' 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:28.259 03:57:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.520 [2024-11-18 03:57:25.020435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.520 "name": "raid_bdev1", 00:08:28.520 "aliases": [ 00:08:28.520 "69cca686-69a9-453a-93a6-35b6bbae7d04" 00:08:28.520 ], 00:08:28.520 "product_name": "Raid Volume", 00:08:28.520 "block_size": 512, 00:08:28.520 "num_blocks": 63488, 00:08:28.520 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:28.520 "assigned_rate_limits": { 00:08:28.520 "rw_ios_per_sec": 0, 00:08:28.520 "rw_mbytes_per_sec": 0, 00:08:28.520 "r_mbytes_per_sec": 0, 00:08:28.520 "w_mbytes_per_sec": 0 
00:08:28.520 }, 00:08:28.520 "claimed": false, 00:08:28.520 "zoned": false, 00:08:28.520 "supported_io_types": { 00:08:28.520 "read": true, 00:08:28.520 "write": true, 00:08:28.520 "unmap": false, 00:08:28.520 "flush": false, 00:08:28.520 "reset": true, 00:08:28.520 "nvme_admin": false, 00:08:28.520 "nvme_io": false, 00:08:28.520 "nvme_io_md": false, 00:08:28.520 "write_zeroes": true, 00:08:28.520 "zcopy": false, 00:08:28.520 "get_zone_info": false, 00:08:28.520 "zone_management": false, 00:08:28.520 "zone_append": false, 00:08:28.520 "compare": false, 00:08:28.520 "compare_and_write": false, 00:08:28.520 "abort": false, 00:08:28.520 "seek_hole": false, 00:08:28.520 "seek_data": false, 00:08:28.520 "copy": false, 00:08:28.520 "nvme_iov_md": false 00:08:28.520 }, 00:08:28.520 "memory_domains": [ 00:08:28.520 { 00:08:28.520 "dma_device_id": "system", 00:08:28.520 "dma_device_type": 1 00:08:28.520 }, 00:08:28.520 { 00:08:28.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.520 "dma_device_type": 2 00:08:28.520 }, 00:08:28.520 { 00:08:28.520 "dma_device_id": "system", 00:08:28.520 "dma_device_type": 1 00:08:28.520 }, 00:08:28.520 { 00:08:28.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.520 "dma_device_type": 2 00:08:28.520 } 00:08:28.520 ], 00:08:28.520 "driver_specific": { 00:08:28.520 "raid": { 00:08:28.520 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:28.520 "strip_size_kb": 0, 00:08:28.520 "state": "online", 00:08:28.520 "raid_level": "raid1", 00:08:28.520 "superblock": true, 00:08:28.520 "num_base_bdevs": 2, 00:08:28.520 "num_base_bdevs_discovered": 2, 00:08:28.520 "num_base_bdevs_operational": 2, 00:08:28.520 "base_bdevs_list": [ 00:08:28.520 { 00:08:28.520 "name": "pt1", 00:08:28.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.520 "is_configured": true, 00:08:28.520 "data_offset": 2048, 00:08:28.520 "data_size": 63488 00:08:28.520 }, 00:08:28.520 { 00:08:28.520 "name": "pt2", 00:08:28.520 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:28.520 "is_configured": true, 00:08:28.520 "data_offset": 2048, 00:08:28.520 "data_size": 63488 00:08:28.520 } 00:08:28.520 ] 00:08:28.520 } 00:08:28.520 } 00:08:28.520 }' 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.520 pt2' 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.520 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:28.781 [2024-11-18 03:57:25.263951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 69cca686-69a9-453a-93a6-35b6bbae7d04 '!=' 69cca686-69a9-453a-93a6-35b6bbae7d04 ']' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:28.781 [2024-11-18 03:57:25.291709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.781 "name": "raid_bdev1", 
00:08:28.781 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:28.781 "strip_size_kb": 0, 00:08:28.781 "state": "online", 00:08:28.781 "raid_level": "raid1", 00:08:28.781 "superblock": true, 00:08:28.781 "num_base_bdevs": 2, 00:08:28.781 "num_base_bdevs_discovered": 1, 00:08:28.781 "num_base_bdevs_operational": 1, 00:08:28.781 "base_bdevs_list": [ 00:08:28.781 { 00:08:28.781 "name": null, 00:08:28.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.781 "is_configured": false, 00:08:28.781 "data_offset": 0, 00:08:28.781 "data_size": 63488 00:08:28.781 }, 00:08:28.781 { 00:08:28.781 "name": "pt2", 00:08:28.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.781 "is_configured": true, 00:08:28.781 "data_offset": 2048, 00:08:28.781 "data_size": 63488 00:08:28.781 } 00:08:28.781 ] 00:08:28.781 }' 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.781 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.352 [2024-11-18 03:57:25.766943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.352 [2024-11-18 03:57:25.767069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.352 [2024-11-18 03:57:25.767186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.352 [2024-11-18 03:57:25.767257] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.352 [2024-11-18 03:57:25.767348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:29.352 03:57:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.352 [2024-11-18 03:57:25.838754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.352 [2024-11-18 03:57:25.838926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.352 [2024-11-18 03:57:25.838965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:29.352 [2024-11-18 03:57:25.838997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.352 [2024-11-18 03:57:25.841511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.352 [2024-11-18 03:57:25.841589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.352 [2024-11-18 03:57:25.841709] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:29.352 [2024-11-18 03:57:25.841786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.352 [2024-11-18 03:57:25.841937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.352 [2024-11-18 03:57:25.841977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.352 [2024-11-18 03:57:25.842213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:29.352 [2024-11-18 03:57:25.842395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.352 [2024-11-18 03:57:25.842433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:29.352 
[2024-11-18 03:57:25.842619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.352 pt2 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.352 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.352 "name": 
"raid_bdev1", 00:08:29.352 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:29.352 "strip_size_kb": 0, 00:08:29.352 "state": "online", 00:08:29.352 "raid_level": "raid1", 00:08:29.352 "superblock": true, 00:08:29.352 "num_base_bdevs": 2, 00:08:29.352 "num_base_bdevs_discovered": 1, 00:08:29.352 "num_base_bdevs_operational": 1, 00:08:29.352 "base_bdevs_list": [ 00:08:29.352 { 00:08:29.352 "name": null, 00:08:29.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.352 "is_configured": false, 00:08:29.352 "data_offset": 2048, 00:08:29.352 "data_size": 63488 00:08:29.353 }, 00:08:29.353 { 00:08:29.353 "name": "pt2", 00:08:29.353 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.353 "is_configured": true, 00:08:29.353 "data_offset": 2048, 00:08:29.353 "data_size": 63488 00:08:29.353 } 00:08:29.353 ] 00:08:29.353 }' 00:08:29.353 03:57:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.353 03:57:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.922 [2024-11-18 03:57:26.262021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.922 [2024-11-18 03:57:26.262139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.922 [2024-11-18 03:57:26.262249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.922 [2024-11-18 03:57:26.262321] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.922 [2024-11-18 03:57:26.262375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.922 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.922 [2024-11-18 03:57:26.301983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:29.922 [2024-11-18 03:57:26.302072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.922 [2024-11-18 03:57:26.302095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:29.922 [2024-11-18 03:57:26.302105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.922 [2024-11-18 03:57:26.304604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.922 [2024-11-18 03:57:26.304645] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:29.922 [2024-11-18 03:57:26.304756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:29.922 [2024-11-18 03:57:26.304804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:29.922 [2024-11-18 03:57:26.304979] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:29.922 [2024-11-18 03:57:26.304990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.922 [2024-11-18 03:57:26.305007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:29.922 [2024-11-18 03:57:26.305083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.922 [2024-11-18 03:57:26.305169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:29.922 [2024-11-18 03:57:26.305183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:29.922 [2024-11-18 03:57:26.305433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:29.923 [2024-11-18 03:57:26.305578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:29.923 [2024-11-18 03:57:26.305590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:29.923 [2024-11-18 03:57:26.305732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.923 pt1 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.923 "name": "raid_bdev1", 00:08:29.923 "uuid": "69cca686-69a9-453a-93a6-35b6bbae7d04", 00:08:29.923 "strip_size_kb": 0, 00:08:29.923 "state": "online", 00:08:29.923 "raid_level": "raid1", 00:08:29.923 "superblock": true, 00:08:29.923 "num_base_bdevs": 2, 00:08:29.923 "num_base_bdevs_discovered": 1, 00:08:29.923 "num_base_bdevs_operational": 1, 00:08:29.923 
"base_bdevs_list": [ 00:08:29.923 { 00:08:29.923 "name": null, 00:08:29.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.923 "is_configured": false, 00:08:29.923 "data_offset": 2048, 00:08:29.923 "data_size": 63488 00:08:29.923 }, 00:08:29.923 { 00:08:29.923 "name": "pt2", 00:08:29.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.923 "is_configured": true, 00:08:29.923 "data_offset": 2048, 00:08:29.923 "data_size": 63488 00:08:29.923 } 00:08:29.923 ] 00:08:29.923 }' 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.923 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.183 [2024-11-18 03:57:26.769332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 69cca686-69a9-453a-93a6-35b6bbae7d04 '!=' 69cca686-69a9-453a-93a6-35b6bbae7d04 ']' 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63188 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63188 ']' 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63188 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.183 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63188 00:08:30.443 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.443 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.443 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63188' 00:08:30.443 killing process with pid 63188 00:08:30.443 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63188 00:08:30.443 [2024-11-18 03:57:26.850523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.443 [2024-11-18 03:57:26.850632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.443 [2024-11-18 03:57:26.850686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.443 [2024-11-18 03:57:26.850702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:30.443 03:57:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63188 00:08:30.443 [2024-11-18 03:57:27.069455] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.824 03:57:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:31.824 00:08:31.824 real 0m5.949s 00:08:31.824 user 0m8.798s 00:08:31.824 sys 0m1.108s 00:08:31.824 03:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.824 ************************************ 00:08:31.824 END TEST raid_superblock_test 00:08:31.824 ************************************ 00:08:31.825 03:57:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 03:57:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:31.825 03:57:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.825 03:57:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.825 03:57:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 ************************************ 00:08:31.825 START TEST raid_read_error_test 00:08:31.825 ************************************ 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WoBfasriy1 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63517 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63517 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63517 ']' 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.825 03:57:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 [2024-11-18 03:57:28.412370] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:31.825 [2024-11-18 03:57:28.412564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63517 ] 00:08:32.085 [2024-11-18 03:57:28.584342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.085 [2024-11-18 03:57:28.723021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.345 [2024-11-18 03:57:28.952646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.345 [2024-11-18 03:57:28.952841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.915 03:57:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 BaseBdev1_malloc 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 true 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [2024-11-18 03:57:29.326457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.915 [2024-11-18 03:57:29.326600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.915 [2024-11-18 03:57:29.326639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.915 [2024-11-18 03:57:29.326669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.915 [2024-11-18 03:57:29.329051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.915 [2024-11-18 03:57:29.329130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:08:32.915 BaseBdev1 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 BaseBdev2_malloc 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 true 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [2024-11-18 03:57:29.398189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.915 [2024-11-18 03:57:29.398262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.915 [2024-11-18 03:57:29.398281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.915 [2024-11-18 03:57:29.398294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:08:32.915 [2024-11-18 03:57:29.400709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.915 [2024-11-18 03:57:29.400835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.915 BaseBdev2 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 [2024-11-18 03:57:29.410226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.915 [2024-11-18 03:57:29.412304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.915 [2024-11-18 03:57:29.412554] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.915 [2024-11-18 03:57:29.412574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.915 [2024-11-18 03:57:29.412817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:32.915 [2024-11-18 03:57:29.413010] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.915 [2024-11-18 03:57:29.413020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.915 [2024-11-18 03:57:29.413171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.915 "name": "raid_bdev1", 00:08:32.915 "uuid": "5b682f8c-9eed-4e6b-b7b0-0bb513e82f84", 00:08:32.915 "strip_size_kb": 0, 00:08:32.915 "state": "online", 00:08:32.915 "raid_level": "raid1", 00:08:32.915 "superblock": true, 00:08:32.915 "num_base_bdevs": 2, 00:08:32.915 "num_base_bdevs_discovered": 2, 00:08:32.915 "num_base_bdevs_operational": 
2, 00:08:32.915 "base_bdevs_list": [ 00:08:32.915 { 00:08:32.915 "name": "BaseBdev1", 00:08:32.915 "uuid": "6d80315c-9388-5336-a538-74aa3a04302e", 00:08:32.915 "is_configured": true, 00:08:32.915 "data_offset": 2048, 00:08:32.915 "data_size": 63488 00:08:32.915 }, 00:08:32.915 { 00:08:32.915 "name": "BaseBdev2", 00:08:32.915 "uuid": "60663e9d-cfc5-58f1-b69e-4379044225fd", 00:08:32.915 "is_configured": true, 00:08:32.915 "data_offset": 2048, 00:08:32.915 "data_size": 63488 00:08:32.915 } 00:08:32.915 ] 00:08:32.915 }' 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.915 03:57:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.485 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.485 03:57:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:33.485 [2024-11-18 03:57:29.966745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:34.426 
03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.426 "name": "raid_bdev1", 00:08:34.426 "uuid": "5b682f8c-9eed-4e6b-b7b0-0bb513e82f84", 00:08:34.426 "strip_size_kb": 0, 00:08:34.426 "state": "online", 00:08:34.426 "raid_level": "raid1", 00:08:34.426 "superblock": true, 00:08:34.426 "num_base_bdevs": 
2, 00:08:34.426 "num_base_bdevs_discovered": 2, 00:08:34.426 "num_base_bdevs_operational": 2, 00:08:34.426 "base_bdevs_list": [ 00:08:34.426 { 00:08:34.426 "name": "BaseBdev1", 00:08:34.426 "uuid": "6d80315c-9388-5336-a538-74aa3a04302e", 00:08:34.426 "is_configured": true, 00:08:34.426 "data_offset": 2048, 00:08:34.426 "data_size": 63488 00:08:34.426 }, 00:08:34.426 { 00:08:34.426 "name": "BaseBdev2", 00:08:34.426 "uuid": "60663e9d-cfc5-58f1-b69e-4379044225fd", 00:08:34.426 "is_configured": true, 00:08:34.426 "data_offset": 2048, 00:08:34.426 "data_size": 63488 00:08:34.426 } 00:08:34.426 ] 00:08:34.426 }' 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.426 03:57:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.687 03:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.687 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.687 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.947 [2024-11-18 03:57:31.331775] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.947 [2024-11-18 03:57:31.331835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.947 [2024-11-18 03:57:31.334452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.947 [2024-11-18 03:57:31.334531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.947 [2024-11-18 03:57:31.334620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.947 [2024-11-18 03:57:31.334634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.947 { 00:08:34.947 "results": [ 00:08:34.948 { 00:08:34.948 "job": 
"raid_bdev1", 00:08:34.948 "core_mask": "0x1", 00:08:34.948 "workload": "randrw", 00:08:34.948 "percentage": 50, 00:08:34.948 "status": "finished", 00:08:34.948 "queue_depth": 1, 00:08:34.948 "io_size": 131072, 00:08:34.948 "runtime": 1.365517, 00:08:34.948 "iops": 14924.017789599104, 00:08:34.948 "mibps": 1865.502223699888, 00:08:34.948 "io_failed": 0, 00:08:34.948 "io_timeout": 0, 00:08:34.948 "avg_latency_us": 64.53957968119849, 00:08:34.948 "min_latency_us": 21.351965065502185, 00:08:34.948 "max_latency_us": 1337.907423580786 00:08:34.948 } 00:08:34.948 ], 00:08:34.948 "core_count": 1 00:08:34.948 } 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63517 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63517 ']' 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63517 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63517 00:08:34.948 killing process with pid 63517 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63517' 00:08:34.948 03:57:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63517 00:08:34.948 [2024-11-18 03:57:31.370628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.948 03:57:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63517 00:08:34.948 [2024-11-18 03:57:31.517949] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WoBfasriy1 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:36.330 ************************************ 00:08:36.330 END TEST raid_read_error_test 00:08:36.330 ************************************ 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:36.330 00:08:36.330 real 0m4.445s 00:08:36.330 user 0m5.241s 00:08:36.330 sys 0m0.600s 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.330 03:57:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.330 03:57:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:36.330 03:57:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:36.330 03:57:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.330 03:57:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.330 ************************************ 00:08:36.330 START TEST raid_write_error_test 00:08:36.330 ************************************ 00:08:36.330 03:57:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:36.330 
03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.keiEbjeYMf 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63664 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63664 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63664 ']' 00:08:36.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.330 03:57:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.330 [2024-11-18 03:57:32.937683] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:36.330 [2024-11-18 03:57:32.937799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63664 ] 00:08:36.590 [2024-11-18 03:57:33.110753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.879 [2024-11-18 03:57:33.242466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.879 [2024-11-18 03:57:33.475342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.879 [2024-11-18 03:57:33.475501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.139 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.139 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:37.139 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.139 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:37.139 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.139 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 BaseBdev1_malloc 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 true 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 [2024-11-18 03:57:33.831766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:37.399 [2024-11-18 03:57:33.831848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.399 [2024-11-18 03:57:33.831870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:37.399 [2024-11-18 03:57:33.831882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.399 [2024-11-18 03:57:33.834210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.399 [2024-11-18 03:57:33.834249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:37.399 BaseBdev1 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 BaseBdev2_malloc 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:37.399 03:57:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 true 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 [2024-11-18 03:57:33.904651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:37.399 [2024-11-18 03:57:33.904832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.399 [2024-11-18 03:57:33.904873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:37.399 [2024-11-18 03:57:33.904914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.399 [2024-11-18 03:57:33.907298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.399 [2024-11-18 03:57:33.907391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:37.399 BaseBdev2 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.399 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.399 [2024-11-18 03:57:33.916666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:37.399 [2024-11-18 03:57:33.918731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.399 [2024-11-18 03:57:33.918995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:37.399 [2024-11-18 03:57:33.919015] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.399 [2024-11-18 03:57:33.919257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:37.400 [2024-11-18 03:57:33.919465] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:37.400 [2024-11-18 03:57:33.919476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:37.400 [2024-11-18 03:57:33.919630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.400 "name": "raid_bdev1", 00:08:37.400 "uuid": "e1efbb78-3b70-4112-8ffe-af0867533750", 00:08:37.400 "strip_size_kb": 0, 00:08:37.400 "state": "online", 00:08:37.400 "raid_level": "raid1", 00:08:37.400 "superblock": true, 00:08:37.400 "num_base_bdevs": 2, 00:08:37.400 "num_base_bdevs_discovered": 2, 00:08:37.400 "num_base_bdevs_operational": 2, 00:08:37.400 "base_bdevs_list": [ 00:08:37.400 { 00:08:37.400 "name": "BaseBdev1", 00:08:37.400 "uuid": "cd118b02-93ed-572d-ae70-9b8aa7708d78", 00:08:37.400 "is_configured": true, 00:08:37.400 "data_offset": 2048, 00:08:37.400 "data_size": 63488 00:08:37.400 }, 00:08:37.400 { 00:08:37.400 "name": "BaseBdev2", 00:08:37.400 "uuid": "56b285a5-c6a7-5ae2-a408-efdceecf7552", 00:08:37.400 "is_configured": true, 00:08:37.400 "data_offset": 2048, 00:08:37.400 "data_size": 63488 00:08:37.400 } 00:08:37.400 ] 00:08:37.400 }' 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.400 03:57:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.970 03:57:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:37.970 03:57:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.970 [2024-11-18 03:57:34.393361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.911 [2024-11-18 03:57:35.312167] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:38.911 [2024-11-18 03:57:35.312361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.911 [2024-11-18 03:57:35.312605] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.911 "name": "raid_bdev1", 00:08:38.911 "uuid": "e1efbb78-3b70-4112-8ffe-af0867533750", 00:08:38.911 "strip_size_kb": 0, 00:08:38.911 "state": "online", 00:08:38.911 "raid_level": "raid1", 00:08:38.911 "superblock": true, 00:08:38.911 "num_base_bdevs": 2, 00:08:38.911 "num_base_bdevs_discovered": 1, 00:08:38.911 "num_base_bdevs_operational": 1, 00:08:38.911 "base_bdevs_list": [ 00:08:38.911 { 00:08:38.911 "name": null, 00:08:38.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.911 "is_configured": false, 00:08:38.911 "data_offset": 0, 00:08:38.911 "data_size": 63488 00:08:38.911 }, 00:08:38.911 { 00:08:38.911 "name": 
"BaseBdev2", 00:08:38.911 "uuid": "56b285a5-c6a7-5ae2-a408-efdceecf7552", 00:08:38.911 "is_configured": true, 00:08:38.911 "data_offset": 2048, 00:08:38.911 "data_size": 63488 00:08:38.911 } 00:08:38.911 ] 00:08:38.911 }' 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.911 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.172 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:39.172 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.172 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.172 [2024-11-18 03:57:35.789167] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:39.172 [2024-11-18 03:57:35.789311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.172 [2024-11-18 03:57:35.791873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.172 [2024-11-18 03:57:35.791916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.172 [2024-11-18 03:57:35.791978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.172 [2024-11-18 03:57:35.791991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:39.172 { 00:08:39.172 "results": [ 00:08:39.172 { 00:08:39.172 "job": "raid_bdev1", 00:08:39.172 "core_mask": "0x1", 00:08:39.173 "workload": "randrw", 00:08:39.173 "percentage": 50, 00:08:39.173 "status": "finished", 00:08:39.173 "queue_depth": 1, 00:08:39.173 "io_size": 131072, 00:08:39.173 "runtime": 1.396434, 00:08:39.173 "iops": 18731.998791206744, 00:08:39.173 "mibps": 2341.499848900843, 00:08:39.173 "io_failed": 0, 00:08:39.173 "io_timeout": 0, 
00:08:39.173 "avg_latency_us": 50.94405692514852,
00:08:39.173 "min_latency_us": 20.79301310043668,
00:08:39.173 "max_latency_us": 1402.2986899563318
00:08:39.173 }
00:08:39.173 ],
00:08:39.173 "core_count": 1
00:08:39.173 }
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63664
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63664 ']'
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63664
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:39.173 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63664
00:08:39.433 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:39.433 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:39.433 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63664'
killing process with pid 63664
03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63664
[2024-11-18 03:57:35.832675] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:39.433 03:57:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63664
00:08:39.433 [2024-11-18 03:57:35.978532] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.keiEbjeYMf
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:08:40.816
00:08:40.816 real 0m4.394s
00:08:40.816 user 0m5.137s
00:08:40.816 sys 0m0.605s
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.816 03:57:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.816 ************************************
00:08:40.816 END TEST raid_write_error_test
00:08:40.816 ************************************
00:08:40.816 03:57:37 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:08:40.816 03:57:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:40.816 03:57:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:08:40.816 03:57:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:40.816 03:57:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.816 03:57:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:40.816 ************************************
00:08:40.816 START TEST raid_state_function_test
00:08:40.816 ************************************
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
Process raid pid: 63802
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63802
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63802'
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63802
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63802 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:40.816 03:57:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.816 [2024-11-18 03:57:37.401433] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:08:40.816 [2024-11-18 03:57:37.401552] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:41.076 [2024-11-18 03:57:37.579037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.335 [2024-11-18 03:57:37.717232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.335 [2024-11-18 03:57:37.961887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:41.335 [2024-11-18 03:57:37.962029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:41.596 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:41.596 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:08:41.596 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:41.596 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.596 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.596 [2024-11-18 03:57:38.229688] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:41.596 [2024-11-18 03:57:38.229777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:41.596 [2024-11-18 03:57:38.229788] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:41.596 [2024-11-18 03:57:38.229797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:41.596 [2024-11-18 03:57:38.229804] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:41.596 [2024-11-18 03:57:38.229812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:41.856 "name": "Existed_Raid",
00:08:41.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.856 "strip_size_kb": 64,
00:08:41.856 "state": "configuring",
00:08:41.856 "raid_level": "raid0",
00:08:41.856 "superblock": false,
00:08:41.856 "num_base_bdevs": 3,
00:08:41.856 "num_base_bdevs_discovered": 0,
00:08:41.856 "num_base_bdevs_operational": 3,
00:08:41.856 "base_bdevs_list": [
00:08:41.856 {
00:08:41.856 "name": "BaseBdev1",
00:08:41.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.856 "is_configured": false,
00:08:41.856 "data_offset": 0,
00:08:41.856 "data_size": 0
00:08:41.856 },
00:08:41.856 {
00:08:41.856 "name": "BaseBdev2",
00:08:41.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.856 "is_configured": false,
00:08:41.856 "data_offset": 0,
00:08:41.856 "data_size": 0
00:08:41.856 },
00:08:41.856 {
00:08:41.856 "name": "BaseBdev3",
00:08:41.856 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:41.856 "is_configured": false,
00:08:41.856 "data_offset": 0,
00:08:41.856 "data_size": 0
00:08:41.856 }
00:08:41.856 ]
00:08:41.856 }'
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:41.856 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.116 [2024-11-18 03:57:38.672967] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:42.116 [2024-11-18 03:57:38.673105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.116 [2024-11-18 03:57:38.684893] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:42.116 [2024-11-18 03:57:38.684996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:42.116 [2024-11-18 03:57:38.685023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:42.116 [2024-11-18 03:57:38.685046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:42.116 [2024-11-18 03:57:38.685063] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:42.116 [2024-11-18 03:57:38.685085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.116 [2024-11-18 03:57:38.734080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.116 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.377 [
00:08:42.377 {
00:08:42.377 "name": "BaseBdev1",
00:08:42.377 "aliases": [
00:08:42.377 "5fab757d-6abf-4404-8527-bde836a5fa99"
00:08:42.377 ],
00:08:42.377 "product_name": "Malloc disk",
00:08:42.377 "block_size": 512,
00:08:42.377 "num_blocks": 65536,
00:08:42.377 "uuid": "5fab757d-6abf-4404-8527-bde836a5fa99",
00:08:42.377 "assigned_rate_limits": {
00:08:42.377 "rw_ios_per_sec": 0,
00:08:42.377 "rw_mbytes_per_sec": 0,
00:08:42.377 "r_mbytes_per_sec": 0,
00:08:42.377 "w_mbytes_per_sec": 0
00:08:42.377 },
00:08:42.377 "claimed": true,
00:08:42.377 "claim_type": "exclusive_write",
00:08:42.377 "zoned": false,
00:08:42.377 "supported_io_types": {
00:08:42.377 "read": true,
00:08:42.377 "write": true,
00:08:42.377 "unmap": true,
00:08:42.377 "flush": true,
00:08:42.377 "reset": true,
00:08:42.377 "nvme_admin": false,
00:08:42.377 "nvme_io": false,
00:08:42.377 "nvme_io_md": false,
00:08:42.377 "write_zeroes": true,
00:08:42.377 "zcopy": true,
00:08:42.377 "get_zone_info": false,
00:08:42.377 "zone_management": false,
00:08:42.377 "zone_append": false,
00:08:42.377 "compare": false,
00:08:42.377 "compare_and_write": false,
00:08:42.377 "abort": true,
00:08:42.377 "seek_hole": false,
00:08:42.377 "seek_data": false,
00:08:42.377 "copy": true,
00:08:42.377 "nvme_iov_md": false
00:08:42.377 },
00:08:42.377 "memory_domains": [
00:08:42.377 {
00:08:42.377 "dma_device_id": "system",
00:08:42.377 "dma_device_type": 1
00:08:42.377 },
00:08:42.377 {
00:08:42.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.377 "dma_device_type": 2
00:08:42.377 }
00:08:42.377 ],
00:08:42.377 "driver_specific": {}
00:08:42.377 }
00:08:42.377 ]
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:42.377 "name": "Existed_Raid",
00:08:42.377 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.377 "strip_size_kb": 64,
00:08:42.377 "state": "configuring",
00:08:42.377 "raid_level": "raid0",
00:08:42.377 "superblock": false,
00:08:42.377 "num_base_bdevs": 3,
00:08:42.377 "num_base_bdevs_discovered": 1,
00:08:42.377 "num_base_bdevs_operational": 3,
00:08:42.377 "base_bdevs_list": [
00:08:42.377 {
00:08:42.377 "name": "BaseBdev1",
00:08:42.377 "uuid": "5fab757d-6abf-4404-8527-bde836a5fa99",
00:08:42.377 "is_configured": true,
00:08:42.377 "data_offset": 0,
00:08:42.377 "data_size": 65536
00:08:42.377 },
00:08:42.377 {
00:08:42.377 "name": "BaseBdev2",
00:08:42.377 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.377 "is_configured": false,
00:08:42.377 "data_offset": 0,
00:08:42.377 "data_size": 0
00:08:42.377 },
00:08:42.377 {
00:08:42.377 "name": "BaseBdev3",
00:08:42.377 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.377 "is_configured": false,
00:08:42.377 "data_offset": 0,
00:08:42.377 "data_size": 0
00:08:42.377 }
00:08:42.377 ]
00:08:42.377 }'
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.377 03:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.637 [2024-11-18 03:57:39.225320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:42.637 [2024-11-18 03:57:39.225479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.637 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.638 [2024-11-18 03:57:39.233334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:42.638 [2024-11-18 03:57:39.235489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:42.638 [2024-11-18 03:57:39.235538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:42.638 [2024-11-18 03:57:39.235549] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:42.638 [2024-11-18 03:57:39.235558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.638 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.897 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:42.897 "name": "Existed_Raid",
00:08:42.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.897 "strip_size_kb": 64,
00:08:42.897 "state": "configuring",
00:08:42.897 "raid_level": "raid0",
00:08:42.897 "superblock": false,
00:08:42.897 "num_base_bdevs": 3,
00:08:42.897 "num_base_bdevs_discovered": 1,
00:08:42.897 "num_base_bdevs_operational": 3,
00:08:42.897 "base_bdevs_list": [
00:08:42.897 {
00:08:42.897 "name": "BaseBdev1",
00:08:42.897 "uuid": "5fab757d-6abf-4404-8527-bde836a5fa99",
00:08:42.897 "is_configured": true,
00:08:42.897 "data_offset": 0,
00:08:42.897 "data_size": 65536
00:08:42.897 },
00:08:42.897 {
00:08:42.897 "name": "BaseBdev2",
00:08:42.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.897 "is_configured": false,
00:08:42.897 "data_offset": 0,
00:08:42.897 "data_size": 0
00:08:42.897 },
00:08:42.897 {
00:08:42.897 "name": "BaseBdev3",
00:08:42.897 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:42.897 "is_configured": false,
00:08:42.897 "data_offset": 0,
00:08:42.897 "data_size": 0
00:08:42.897 }
00:08:42.897 ]
00:08:42.897 }'
00:08:42.897 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:42.897 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.157 [2024-11-18 03:57:39.699955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.157 [
00:08:43.157 {
00:08:43.157 "name": "BaseBdev2",
00:08:43.157 "aliases": [
00:08:43.157 "8e2de9a2-057a-40ba-a4ab-2fe7f3ca1cd4"
00:08:43.157 ],
00:08:43.157 "product_name": "Malloc disk",
00:08:43.157 "block_size": 512,
00:08:43.157 "num_blocks": 65536,
00:08:43.157 "uuid": "8e2de9a2-057a-40ba-a4ab-2fe7f3ca1cd4",
00:08:43.157 "assigned_rate_limits": {
00:08:43.157 "rw_ios_per_sec": 0,
00:08:43.157 "rw_mbytes_per_sec": 0,
00:08:43.157 "r_mbytes_per_sec": 0,
00:08:43.157 "w_mbytes_per_sec": 0
00:08:43.157 },
00:08:43.157 "claimed": true,
00:08:43.157 "claim_type": "exclusive_write",
00:08:43.157 "zoned": false,
00:08:43.157 "supported_io_types": {
00:08:43.157 "read": true,
00:08:43.157 "write": true,
00:08:43.157 "unmap": true,
00:08:43.157 "flush": true,
00:08:43.157 "reset": true,
00:08:43.157 "nvme_admin": false,
00:08:43.157 "nvme_io": false,
00:08:43.157 "nvme_io_md": false,
00:08:43.157 "write_zeroes": true,
00:08:43.157 "zcopy": true,
00:08:43.157 "get_zone_info": false,
00:08:43.157 "zone_management": false,
00:08:43.157 "zone_append": false,
00:08:43.157 "compare": false,
00:08:43.157 "compare_and_write": false,
00:08:43.157 "abort": true,
00:08:43.157 "seek_hole": false,
00:08:43.157 "seek_data": false,
00:08:43.157 "copy": true,
00:08:43.157 "nvme_iov_md": false
00:08:43.157 },
00:08:43.157 "memory_domains": [
00:08:43.157 {
00:08:43.157 "dma_device_id": "system",
00:08:43.157 "dma_device_type": 1
00:08:43.157 },
00:08:43.157 {
00:08:43.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.157 "dma_device_type": 2
00:08:43.157 }
00:08:43.157 ],
00:08:43.157 "driver_specific": {}
00:08:43.157 }
00:08:43.157 ]
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:43.157 "name": "Existed_Raid",
00:08:43.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.157 "strip_size_kb": 64,
00:08:43.157 "state": "configuring",
00:08:43.157 "raid_level": "raid0",
00:08:43.157 "superblock": false,
00:08:43.157 "num_base_bdevs": 3,
00:08:43.157 "num_base_bdevs_discovered": 2,
00:08:43.157 "num_base_bdevs_operational": 3,
00:08:43.157 "base_bdevs_list": [
00:08:43.157 {
00:08:43.157 "name": "BaseBdev1",
00:08:43.157 "uuid": "5fab757d-6abf-4404-8527-bde836a5fa99",
00:08:43.157 "is_configured": true,
00:08:43.157 "data_offset": 0,
00:08:43.157 "data_size": 65536
00:08:43.157 },
00:08:43.157 {
00:08:43.157 "name": "BaseBdev2",
00:08:43.157 "uuid": "8e2de9a2-057a-40ba-a4ab-2fe7f3ca1cd4",
00:08:43.157 "is_configured": true,
00:08:43.157 "data_offset": 0,
00:08:43.157 "data_size": 65536
00:08:43.157 },
00:08:43.157 {
00:08:43.157 "name": "BaseBdev3",
00:08:43.157 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:43.157 "is_configured": false,
00:08:43.157 "data_offset": 0,
00:08:43.157 "data_size": 0
00:08:43.157 }
00:08:43.157 ]
00:08:43.157 }'
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:43.157 03:57:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.727 [2024-11-18 03:57:40.233520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-11-18 03:57:40.233647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:43.727 [2024-11-18 03:57:40.233680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:43.727 [2024-11-18 03:57:40.234025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:43.727 [2024-11-18 03:57:40.234245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:43.727 [2024-11-18 03:57:40.234283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:43.727 [2024-11-18 03:57:40.234595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
BaseBdev3
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:43.727 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.728 [ 00:08:43.728 { 00:08:43.728 "name": "BaseBdev3", 00:08:43.728 "aliases": [ 00:08:43.728 "01726be9-8ccb-4e82-b925-ddcc57993274" 00:08:43.728 ], 00:08:43.728 "product_name": "Malloc disk", 00:08:43.728 "block_size": 512, 00:08:43.728 "num_blocks": 65536, 00:08:43.728 "uuid": "01726be9-8ccb-4e82-b925-ddcc57993274", 00:08:43.728 "assigned_rate_limits": { 00:08:43.728 "rw_ios_per_sec": 0, 00:08:43.728 "rw_mbytes_per_sec": 0, 00:08:43.728 "r_mbytes_per_sec": 0, 00:08:43.728 "w_mbytes_per_sec": 0 00:08:43.728 }, 00:08:43.728 "claimed": true, 00:08:43.728 "claim_type": "exclusive_write", 00:08:43.728 "zoned": false, 00:08:43.728 "supported_io_types": { 00:08:43.728 "read": true, 00:08:43.728 "write": true, 00:08:43.728 "unmap": true, 00:08:43.728 "flush": true, 00:08:43.728 "reset": true, 00:08:43.728 "nvme_admin": false, 00:08:43.728 "nvme_io": false, 00:08:43.728 "nvme_io_md": false, 00:08:43.728 "write_zeroes": true, 00:08:43.728 "zcopy": true, 00:08:43.728 "get_zone_info": false, 00:08:43.728 "zone_management": false, 00:08:43.728 "zone_append": false, 00:08:43.728 "compare": false, 00:08:43.728 "compare_and_write": false, 00:08:43.728 "abort": true, 00:08:43.728 "seek_hole": false, 00:08:43.728 "seek_data": false, 00:08:43.728 "copy": true, 00:08:43.728 "nvme_iov_md": false 00:08:43.728 }, 00:08:43.728 "memory_domains": [ 00:08:43.728 { 00:08:43.728 "dma_device_id": "system", 00:08:43.728 "dma_device_type": 1 00:08:43.728 }, 00:08:43.728 { 00:08:43.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.728 "dma_device_type": 2 00:08:43.728 } 00:08:43.728 ], 00:08:43.728 "driver_specific": {} 00:08:43.728 } 00:08:43.728 ] 
00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.728 "name": "Existed_Raid", 00:08:43.728 "uuid": "7e9ae87d-d6c8-4bec-83b0-1584ae5b08d3", 00:08:43.728 "strip_size_kb": 64, 00:08:43.728 "state": "online", 00:08:43.728 "raid_level": "raid0", 00:08:43.728 "superblock": false, 00:08:43.728 "num_base_bdevs": 3, 00:08:43.728 "num_base_bdevs_discovered": 3, 00:08:43.728 "num_base_bdevs_operational": 3, 00:08:43.728 "base_bdevs_list": [ 00:08:43.728 { 00:08:43.728 "name": "BaseBdev1", 00:08:43.728 "uuid": "5fab757d-6abf-4404-8527-bde836a5fa99", 00:08:43.728 "is_configured": true, 00:08:43.728 "data_offset": 0, 00:08:43.728 "data_size": 65536 00:08:43.728 }, 00:08:43.728 { 00:08:43.728 "name": "BaseBdev2", 00:08:43.728 "uuid": "8e2de9a2-057a-40ba-a4ab-2fe7f3ca1cd4", 00:08:43.728 "is_configured": true, 00:08:43.728 "data_offset": 0, 00:08:43.728 "data_size": 65536 00:08:43.728 }, 00:08:43.728 { 00:08:43.728 "name": "BaseBdev3", 00:08:43.728 "uuid": "01726be9-8ccb-4e82-b925-ddcc57993274", 00:08:43.728 "is_configured": true, 00:08:43.728 "data_offset": 0, 00:08:43.728 "data_size": 65536 00:08:43.728 } 00:08:43.728 ] 00:08:43.728 }' 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.728 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.308 [2024-11-18 03:57:40.669193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.308 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.308 "name": "Existed_Raid", 00:08:44.308 "aliases": [ 00:08:44.308 "7e9ae87d-d6c8-4bec-83b0-1584ae5b08d3" 00:08:44.308 ], 00:08:44.308 "product_name": "Raid Volume", 00:08:44.308 "block_size": 512, 00:08:44.308 "num_blocks": 196608, 00:08:44.308 "uuid": "7e9ae87d-d6c8-4bec-83b0-1584ae5b08d3", 00:08:44.308 "assigned_rate_limits": { 00:08:44.309 "rw_ios_per_sec": 0, 00:08:44.309 "rw_mbytes_per_sec": 0, 00:08:44.309 "r_mbytes_per_sec": 0, 00:08:44.309 "w_mbytes_per_sec": 0 00:08:44.309 }, 00:08:44.309 "claimed": false, 00:08:44.309 "zoned": false, 00:08:44.309 "supported_io_types": { 00:08:44.309 "read": true, 00:08:44.309 "write": true, 00:08:44.309 "unmap": true, 00:08:44.309 "flush": true, 00:08:44.309 "reset": true, 00:08:44.309 "nvme_admin": false, 00:08:44.309 "nvme_io": false, 00:08:44.309 "nvme_io_md": false, 00:08:44.309 "write_zeroes": true, 00:08:44.309 "zcopy": false, 00:08:44.309 "get_zone_info": false, 00:08:44.309 "zone_management": false, 00:08:44.309 
"zone_append": false, 00:08:44.309 "compare": false, 00:08:44.309 "compare_and_write": false, 00:08:44.309 "abort": false, 00:08:44.309 "seek_hole": false, 00:08:44.309 "seek_data": false, 00:08:44.309 "copy": false, 00:08:44.309 "nvme_iov_md": false 00:08:44.309 }, 00:08:44.309 "memory_domains": [ 00:08:44.309 { 00:08:44.309 "dma_device_id": "system", 00:08:44.309 "dma_device_type": 1 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.309 "dma_device_type": 2 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "dma_device_id": "system", 00:08:44.309 "dma_device_type": 1 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.309 "dma_device_type": 2 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "dma_device_id": "system", 00:08:44.309 "dma_device_type": 1 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.309 "dma_device_type": 2 00:08:44.309 } 00:08:44.309 ], 00:08:44.309 "driver_specific": { 00:08:44.309 "raid": { 00:08:44.309 "uuid": "7e9ae87d-d6c8-4bec-83b0-1584ae5b08d3", 00:08:44.309 "strip_size_kb": 64, 00:08:44.309 "state": "online", 00:08:44.309 "raid_level": "raid0", 00:08:44.309 "superblock": false, 00:08:44.309 "num_base_bdevs": 3, 00:08:44.309 "num_base_bdevs_discovered": 3, 00:08:44.309 "num_base_bdevs_operational": 3, 00:08:44.309 "base_bdevs_list": [ 00:08:44.309 { 00:08:44.309 "name": "BaseBdev1", 00:08:44.309 "uuid": "5fab757d-6abf-4404-8527-bde836a5fa99", 00:08:44.309 "is_configured": true, 00:08:44.309 "data_offset": 0, 00:08:44.309 "data_size": 65536 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "name": "BaseBdev2", 00:08:44.309 "uuid": "8e2de9a2-057a-40ba-a4ab-2fe7f3ca1cd4", 00:08:44.309 "is_configured": true, 00:08:44.309 "data_offset": 0, 00:08:44.309 "data_size": 65536 00:08:44.309 }, 00:08:44.309 { 00:08:44.309 "name": "BaseBdev3", 00:08:44.309 "uuid": "01726be9-8ccb-4e82-b925-ddcc57993274", 00:08:44.309 "is_configured": true, 
00:08:44.309 "data_offset": 0, 00:08:44.309 "data_size": 65536 00:08:44.309 } 00:08:44.309 ] 00:08:44.309 } 00:08:44.309 } 00:08:44.309 }' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.309 BaseBdev2 00:08:44.309 BaseBdev3' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.309 03:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.584 [2024-11-18 03:57:40.944509] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.584 [2024-11-18 03:57:40.944570] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.584 [2024-11-18 03:57:40.944632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.584 "name": "Existed_Raid", 00:08:44.584 "uuid": "7e9ae87d-d6c8-4bec-83b0-1584ae5b08d3", 00:08:44.584 "strip_size_kb": 64, 00:08:44.584 "state": "offline", 00:08:44.584 "raid_level": "raid0", 00:08:44.584 "superblock": false, 00:08:44.584 "num_base_bdevs": 3, 00:08:44.584 "num_base_bdevs_discovered": 2, 00:08:44.584 "num_base_bdevs_operational": 2, 00:08:44.584 "base_bdevs_list": [ 00:08:44.584 { 00:08:44.584 "name": null, 00:08:44.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.584 "is_configured": false, 00:08:44.584 "data_offset": 0, 00:08:44.584 "data_size": 65536 00:08:44.584 }, 00:08:44.584 { 00:08:44.584 "name": "BaseBdev2", 00:08:44.584 "uuid": "8e2de9a2-057a-40ba-a4ab-2fe7f3ca1cd4", 00:08:44.584 "is_configured": true, 00:08:44.584 "data_offset": 0, 00:08:44.584 "data_size": 65536 00:08:44.584 }, 00:08:44.584 { 00:08:44.584 "name": "BaseBdev3", 00:08:44.584 "uuid": "01726be9-8ccb-4e82-b925-ddcc57993274", 00:08:44.584 "is_configured": true, 00:08:44.584 "data_offset": 0, 00:08:44.584 "data_size": 65536 00:08:44.584 } 00:08:44.584 ] 00:08:44.584 }' 00:08:44.584 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.584 03:57:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.154 [2024-11-18 03:57:41.521987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.154 03:57:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.154 [2024-11-18 03:57:41.687815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:45.154 [2024-11-18 03:57:41.687908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.154 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.415 BaseBdev2 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.415 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.415 [ 00:08:45.415 { 00:08:45.415 "name": "BaseBdev2", 00:08:45.415 "aliases": [ 00:08:45.415 "6d9cd72b-88e0-4f32-b31e-7f4786ff7870" 00:08:45.415 ], 00:08:45.415 "product_name": "Malloc disk", 00:08:45.415 "block_size": 512, 00:08:45.415 "num_blocks": 65536, 00:08:45.415 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:45.415 "assigned_rate_limits": { 00:08:45.415 "rw_ios_per_sec": 0, 00:08:45.415 "rw_mbytes_per_sec": 0, 00:08:45.415 "r_mbytes_per_sec": 0, 00:08:45.415 "w_mbytes_per_sec": 0 00:08:45.415 }, 00:08:45.415 "claimed": false, 00:08:45.415 "zoned": false, 00:08:45.415 "supported_io_types": { 00:08:45.415 "read": true, 00:08:45.415 "write": true, 00:08:45.415 "unmap": true, 00:08:45.416 "flush": true, 00:08:45.416 "reset": true, 00:08:45.416 "nvme_admin": false, 00:08:45.416 "nvme_io": false, 00:08:45.416 "nvme_io_md": false, 00:08:45.416 "write_zeroes": true, 00:08:45.416 "zcopy": true, 00:08:45.416 "get_zone_info": false, 00:08:45.416 "zone_management": false, 00:08:45.416 "zone_append": false, 00:08:45.416 "compare": false, 00:08:45.416 "compare_and_write": false, 00:08:45.416 "abort": true, 00:08:45.416 "seek_hole": false, 00:08:45.416 "seek_data": false, 00:08:45.416 "copy": true, 00:08:45.416 "nvme_iov_md": false 00:08:45.416 }, 00:08:45.416 "memory_domains": [ 00:08:45.416 { 00:08:45.416 "dma_device_id": "system", 00:08:45.416 "dma_device_type": 1 00:08:45.416 }, 
00:08:45.416 { 00:08:45.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.416 "dma_device_type": 2 00:08:45.416 } 00:08:45.416 ], 00:08:45.416 "driver_specific": {} 00:08:45.416 } 00:08:45.416 ] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.416 BaseBdev3 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.416 [ 00:08:45.416 { 00:08:45.416 "name": "BaseBdev3", 00:08:45.416 "aliases": [ 00:08:45.416 "c62c0537-b048-4aa5-9d4d-a72459bd9082" 00:08:45.416 ], 00:08:45.416 "product_name": "Malloc disk", 00:08:45.416 "block_size": 512, 00:08:45.416 "num_blocks": 65536, 00:08:45.416 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:45.416 "assigned_rate_limits": { 00:08:45.416 "rw_ios_per_sec": 0, 00:08:45.416 "rw_mbytes_per_sec": 0, 00:08:45.416 "r_mbytes_per_sec": 0, 00:08:45.416 "w_mbytes_per_sec": 0 00:08:45.416 }, 00:08:45.416 "claimed": false, 00:08:45.416 "zoned": false, 00:08:45.416 "supported_io_types": { 00:08:45.416 "read": true, 00:08:45.416 "write": true, 00:08:45.416 "unmap": true, 00:08:45.416 "flush": true, 00:08:45.416 "reset": true, 00:08:45.416 "nvme_admin": false, 00:08:45.416 "nvme_io": false, 00:08:45.416 "nvme_io_md": false, 00:08:45.416 "write_zeroes": true, 00:08:45.416 "zcopy": true, 00:08:45.416 "get_zone_info": false, 00:08:45.416 "zone_management": false, 00:08:45.416 "zone_append": false, 00:08:45.416 "compare": false, 00:08:45.416 "compare_and_write": false, 00:08:45.416 "abort": true, 00:08:45.416 "seek_hole": false, 00:08:45.416 "seek_data": false, 00:08:45.416 "copy": true, 00:08:45.416 "nvme_iov_md": false 00:08:45.416 }, 00:08:45.416 "memory_domains": [ 00:08:45.416 { 00:08:45.416 "dma_device_id": "system", 00:08:45.416 "dma_device_type": 1 00:08:45.416 }, 00:08:45.416 { 
00:08:45.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.416 "dma_device_type": 2 00:08:45.416 } 00:08:45.416 ], 00:08:45.416 "driver_specific": {} 00:08:45.416 } 00:08:45.416 ] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.416 03:57:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.416 [2024-11-18 03:57:42.005998] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.416 [2024-11-18 03:57:42.006144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.416 [2024-11-18 03:57:42.006191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.416 [2024-11-18 03:57:42.008319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.416 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.677 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.677 "name": "Existed_Raid", 00:08:45.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.677 "strip_size_kb": 64, 00:08:45.677 "state": "configuring", 00:08:45.677 "raid_level": "raid0", 00:08:45.677 "superblock": false, 00:08:45.677 "num_base_bdevs": 3, 00:08:45.677 "num_base_bdevs_discovered": 2, 00:08:45.677 "num_base_bdevs_operational": 3, 00:08:45.677 "base_bdevs_list": [ 00:08:45.677 { 00:08:45.677 "name": "BaseBdev1", 00:08:45.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.677 
"is_configured": false, 00:08:45.677 "data_offset": 0, 00:08:45.677 "data_size": 0 00:08:45.677 }, 00:08:45.677 { 00:08:45.677 "name": "BaseBdev2", 00:08:45.677 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:45.677 "is_configured": true, 00:08:45.677 "data_offset": 0, 00:08:45.677 "data_size": 65536 00:08:45.677 }, 00:08:45.677 { 00:08:45.677 "name": "BaseBdev3", 00:08:45.677 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:45.677 "is_configured": true, 00:08:45.677 "data_offset": 0, 00:08:45.677 "data_size": 65536 00:08:45.677 } 00:08:45.677 ] 00:08:45.677 }' 00:08:45.677 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.677 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 [2024-11-18 03:57:42.477249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.938 03:57:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.938 "name": "Existed_Raid", 00:08:45.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.938 "strip_size_kb": 64, 00:08:45.938 "state": "configuring", 00:08:45.938 "raid_level": "raid0", 00:08:45.938 "superblock": false, 00:08:45.938 "num_base_bdevs": 3, 00:08:45.938 "num_base_bdevs_discovered": 1, 00:08:45.938 "num_base_bdevs_operational": 3, 00:08:45.938 "base_bdevs_list": [ 00:08:45.938 { 00:08:45.938 "name": "BaseBdev1", 00:08:45.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.938 "is_configured": false, 00:08:45.938 "data_offset": 0, 00:08:45.938 "data_size": 0 00:08:45.938 }, 00:08:45.938 { 00:08:45.938 "name": null, 00:08:45.938 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:45.938 "is_configured": false, 00:08:45.938 "data_offset": 0, 
00:08:45.938 "data_size": 65536 00:08:45.938 }, 00:08:45.938 { 00:08:45.938 "name": "BaseBdev3", 00:08:45.938 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:45.938 "is_configured": true, 00:08:45.938 "data_offset": 0, 00:08:45.938 "data_size": 65536 00:08:45.938 } 00:08:45.938 ] 00:08:45.938 }' 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.938 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.509 [2024-11-18 03:57:42.983636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.509 BaseBdev1 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.509 03:57:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.509 [ 00:08:46.509 { 00:08:46.509 "name": "BaseBdev1", 00:08:46.509 "aliases": [ 00:08:46.509 "c235832b-0847-4a45-913d-08e4e5d3af84" 00:08:46.509 ], 00:08:46.509 "product_name": "Malloc disk", 00:08:46.509 "block_size": 512, 00:08:46.509 "num_blocks": 65536, 00:08:46.509 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:46.509 "assigned_rate_limits": { 00:08:46.509 "rw_ios_per_sec": 0, 00:08:46.509 "rw_mbytes_per_sec": 0, 00:08:46.509 "r_mbytes_per_sec": 0, 00:08:46.509 "w_mbytes_per_sec": 0 00:08:46.509 }, 00:08:46.509 "claimed": true, 00:08:46.509 "claim_type": "exclusive_write", 00:08:46.509 "zoned": false, 00:08:46.509 "supported_io_types": { 00:08:46.509 "read": true, 00:08:46.509 "write": true, 00:08:46.509 "unmap": 
true, 00:08:46.509 "flush": true, 00:08:46.509 "reset": true, 00:08:46.509 "nvme_admin": false, 00:08:46.509 "nvme_io": false, 00:08:46.509 "nvme_io_md": false, 00:08:46.509 "write_zeroes": true, 00:08:46.509 "zcopy": true, 00:08:46.509 "get_zone_info": false, 00:08:46.509 "zone_management": false, 00:08:46.509 "zone_append": false, 00:08:46.509 "compare": false, 00:08:46.509 "compare_and_write": false, 00:08:46.509 "abort": true, 00:08:46.509 "seek_hole": false, 00:08:46.509 "seek_data": false, 00:08:46.509 "copy": true, 00:08:46.509 "nvme_iov_md": false 00:08:46.509 }, 00:08:46.509 "memory_domains": [ 00:08:46.509 { 00:08:46.509 "dma_device_id": "system", 00:08:46.509 "dma_device_type": 1 00:08:46.509 }, 00:08:46.509 { 00:08:46.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.509 "dma_device_type": 2 00:08:46.509 } 00:08:46.509 ], 00:08:46.509 "driver_specific": {} 00:08:46.509 } 00:08:46.509 ] 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.509 03:57:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.509 "name": "Existed_Raid", 00:08:46.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.509 "strip_size_kb": 64, 00:08:46.509 "state": "configuring", 00:08:46.509 "raid_level": "raid0", 00:08:46.509 "superblock": false, 00:08:46.509 "num_base_bdevs": 3, 00:08:46.509 "num_base_bdevs_discovered": 2, 00:08:46.509 "num_base_bdevs_operational": 3, 00:08:46.509 "base_bdevs_list": [ 00:08:46.509 { 00:08:46.509 "name": "BaseBdev1", 00:08:46.509 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:46.509 "is_configured": true, 00:08:46.509 "data_offset": 0, 00:08:46.509 "data_size": 65536 00:08:46.509 }, 00:08:46.509 { 00:08:46.509 "name": null, 00:08:46.509 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:46.509 "is_configured": false, 00:08:46.509 "data_offset": 0, 00:08:46.509 "data_size": 65536 00:08:46.509 }, 00:08:46.509 { 00:08:46.509 "name": "BaseBdev3", 00:08:46.509 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:46.509 "is_configured": true, 00:08:46.509 "data_offset": 0, 
00:08:46.509 "data_size": 65536 00:08:46.509 } 00:08:46.509 ] 00:08:46.509 }' 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.509 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.080 [2024-11-18 03:57:43.546754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.080 "name": "Existed_Raid", 00:08:47.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.080 "strip_size_kb": 64, 00:08:47.080 "state": "configuring", 00:08:47.080 "raid_level": "raid0", 00:08:47.080 "superblock": false, 00:08:47.080 "num_base_bdevs": 3, 00:08:47.080 "num_base_bdevs_discovered": 1, 00:08:47.080 "num_base_bdevs_operational": 3, 00:08:47.080 "base_bdevs_list": [ 00:08:47.080 { 00:08:47.080 "name": "BaseBdev1", 00:08:47.080 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:47.080 "is_configured": true, 00:08:47.080 "data_offset": 0, 00:08:47.080 "data_size": 65536 00:08:47.080 }, 00:08:47.080 { 
00:08:47.080 "name": null, 00:08:47.080 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:47.080 "is_configured": false, 00:08:47.080 "data_offset": 0, 00:08:47.080 "data_size": 65536 00:08:47.080 }, 00:08:47.080 { 00:08:47.080 "name": null, 00:08:47.080 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:47.080 "is_configured": false, 00:08:47.080 "data_offset": 0, 00:08:47.080 "data_size": 65536 00:08:47.080 } 00:08:47.080 ] 00:08:47.080 }' 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.080 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.340 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.340 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.340 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.340 03:57:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.600 03:57:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.600 [2024-11-18 03:57:44.021982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.600 "name": "Existed_Raid", 00:08:47.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.600 "strip_size_kb": 64, 00:08:47.600 "state": "configuring", 00:08:47.600 "raid_level": "raid0", 00:08:47.600 
"superblock": false, 00:08:47.600 "num_base_bdevs": 3, 00:08:47.600 "num_base_bdevs_discovered": 2, 00:08:47.600 "num_base_bdevs_operational": 3, 00:08:47.600 "base_bdevs_list": [ 00:08:47.600 { 00:08:47.600 "name": "BaseBdev1", 00:08:47.600 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:47.600 "is_configured": true, 00:08:47.600 "data_offset": 0, 00:08:47.600 "data_size": 65536 00:08:47.600 }, 00:08:47.600 { 00:08:47.600 "name": null, 00:08:47.600 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:47.600 "is_configured": false, 00:08:47.600 "data_offset": 0, 00:08:47.600 "data_size": 65536 00:08:47.600 }, 00:08:47.600 { 00:08:47.600 "name": "BaseBdev3", 00:08:47.600 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:47.600 "is_configured": true, 00:08:47.600 "data_offset": 0, 00:08:47.600 "data_size": 65536 00:08:47.600 } 00:08:47.600 ] 00:08:47.600 }' 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.600 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.860 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.860 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.860 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.860 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.120 [2024-11-18 03:57:44.517139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.120 "name": "Existed_Raid", 00:08:48.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.120 "strip_size_kb": 64, 00:08:48.120 "state": "configuring", 00:08:48.120 "raid_level": "raid0", 00:08:48.120 "superblock": false, 00:08:48.120 "num_base_bdevs": 3, 00:08:48.120 "num_base_bdevs_discovered": 1, 00:08:48.120 "num_base_bdevs_operational": 3, 00:08:48.120 "base_bdevs_list": [ 00:08:48.120 { 00:08:48.120 "name": null, 00:08:48.120 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:48.120 "is_configured": false, 00:08:48.120 "data_offset": 0, 00:08:48.120 "data_size": 65536 00:08:48.120 }, 00:08:48.120 { 00:08:48.120 "name": null, 00:08:48.120 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:48.120 "is_configured": false, 00:08:48.120 "data_offset": 0, 00:08:48.120 "data_size": 65536 00:08:48.120 }, 00:08:48.120 { 00:08:48.120 "name": "BaseBdev3", 00:08:48.120 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:48.120 "is_configured": true, 00:08:48.120 "data_offset": 0, 00:08:48.120 "data_size": 65536 00:08:48.120 } 00:08:48.120 ] 00:08:48.120 }' 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.120 03:57:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.690 [2024-11-18 03:57:45.136220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.690 "name": "Existed_Raid", 00:08:48.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.690 "strip_size_kb": 64, 00:08:48.690 "state": "configuring", 00:08:48.690 "raid_level": "raid0", 00:08:48.690 "superblock": false, 00:08:48.690 "num_base_bdevs": 3, 00:08:48.690 "num_base_bdevs_discovered": 2, 00:08:48.690 "num_base_bdevs_operational": 3, 00:08:48.690 "base_bdevs_list": [ 00:08:48.690 { 00:08:48.690 "name": null, 00:08:48.690 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:48.690 "is_configured": false, 00:08:48.690 "data_offset": 0, 00:08:48.690 "data_size": 65536 00:08:48.690 }, 00:08:48.690 { 00:08:48.690 "name": "BaseBdev2", 00:08:48.690 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:48.690 "is_configured": true, 00:08:48.690 "data_offset": 0, 00:08:48.690 "data_size": 65536 00:08:48.690 }, 00:08:48.690 { 00:08:48.690 "name": "BaseBdev3", 00:08:48.690 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:48.690 "is_configured": true, 00:08:48.690 "data_offset": 0, 00:08:48.690 "data_size": 65536 00:08:48.690 } 00:08:48.690 ] 00:08:48.690 }' 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.690 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.260 03:57:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c235832b-0847-4a45-913d-08e4e5d3af84 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 [2024-11-18 03:57:45.741011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:49.260 [2024-11-18 03:57:45.741147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:49.260 [2024-11-18 03:57:45.741163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:49.260 [2024-11-18 03:57:45.741440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:49.260 [2024-11-18 03:57:45.741616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:49.260 [2024-11-18 03:57:45.741625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:49.260 [2024-11-18 03:57:45.741864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.260 NewBaseBdev 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:49.260 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:49.261 [ 00:08:49.261 { 00:08:49.261 "name": "NewBaseBdev", 00:08:49.261 "aliases": [ 00:08:49.261 "c235832b-0847-4a45-913d-08e4e5d3af84" 00:08:49.261 ], 00:08:49.261 "product_name": "Malloc disk", 00:08:49.261 "block_size": 512, 00:08:49.261 "num_blocks": 65536, 00:08:49.261 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:49.261 "assigned_rate_limits": { 00:08:49.261 "rw_ios_per_sec": 0, 00:08:49.261 "rw_mbytes_per_sec": 0, 00:08:49.261 "r_mbytes_per_sec": 0, 00:08:49.261 "w_mbytes_per_sec": 0 00:08:49.261 }, 00:08:49.261 "claimed": true, 00:08:49.261 "claim_type": "exclusive_write", 00:08:49.261 "zoned": false, 00:08:49.261 "supported_io_types": { 00:08:49.261 "read": true, 00:08:49.261 "write": true, 00:08:49.261 "unmap": true, 00:08:49.261 "flush": true, 00:08:49.261 "reset": true, 00:08:49.261 "nvme_admin": false, 00:08:49.261 "nvme_io": false, 00:08:49.261 "nvme_io_md": false, 00:08:49.261 "write_zeroes": true, 00:08:49.261 "zcopy": true, 00:08:49.261 "get_zone_info": false, 00:08:49.261 "zone_management": false, 00:08:49.261 "zone_append": false, 00:08:49.261 "compare": false, 00:08:49.261 "compare_and_write": false, 00:08:49.261 "abort": true, 00:08:49.261 "seek_hole": false, 00:08:49.261 "seek_data": false, 00:08:49.261 "copy": true, 00:08:49.261 "nvme_iov_md": false 00:08:49.261 }, 00:08:49.261 "memory_domains": [ 00:08:49.261 { 00:08:49.261 "dma_device_id": "system", 00:08:49.261 "dma_device_type": 1 00:08:49.261 }, 00:08:49.261 { 00:08:49.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.261 "dma_device_type": 2 00:08:49.261 } 00:08:49.261 ], 00:08:49.261 "driver_specific": {} 00:08:49.261 } 00:08:49.261 ] 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.261 "name": "Existed_Raid", 00:08:49.261 "uuid": "64e9b0b8-f639-4175-b174-47eb4eb3fa0c", 00:08:49.261 "strip_size_kb": 64, 00:08:49.261 "state": "online", 00:08:49.261 "raid_level": "raid0", 00:08:49.261 "superblock": false, 00:08:49.261 "num_base_bdevs": 3, 00:08:49.261 
"num_base_bdevs_discovered": 3, 00:08:49.261 "num_base_bdevs_operational": 3, 00:08:49.261 "base_bdevs_list": [ 00:08:49.261 { 00:08:49.261 "name": "NewBaseBdev", 00:08:49.261 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:49.261 "is_configured": true, 00:08:49.261 "data_offset": 0, 00:08:49.261 "data_size": 65536 00:08:49.261 }, 00:08:49.261 { 00:08:49.261 "name": "BaseBdev2", 00:08:49.261 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:49.261 "is_configured": true, 00:08:49.261 "data_offset": 0, 00:08:49.261 "data_size": 65536 00:08:49.261 }, 00:08:49.261 { 00:08:49.261 "name": "BaseBdev3", 00:08:49.261 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:49.261 "is_configured": true, 00:08:49.261 "data_offset": 0, 00:08:49.261 "data_size": 65536 00:08:49.261 } 00:08:49.261 ] 00:08:49.261 }' 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.261 03:57:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.831 [2024-11-18 03:57:46.212615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.831 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.831 "name": "Existed_Raid", 00:08:49.831 "aliases": [ 00:08:49.831 "64e9b0b8-f639-4175-b174-47eb4eb3fa0c" 00:08:49.831 ], 00:08:49.831 "product_name": "Raid Volume", 00:08:49.831 "block_size": 512, 00:08:49.831 "num_blocks": 196608, 00:08:49.831 "uuid": "64e9b0b8-f639-4175-b174-47eb4eb3fa0c", 00:08:49.831 "assigned_rate_limits": { 00:08:49.831 "rw_ios_per_sec": 0, 00:08:49.831 "rw_mbytes_per_sec": 0, 00:08:49.831 "r_mbytes_per_sec": 0, 00:08:49.831 "w_mbytes_per_sec": 0 00:08:49.831 }, 00:08:49.831 "claimed": false, 00:08:49.831 "zoned": false, 00:08:49.831 "supported_io_types": { 00:08:49.831 "read": true, 00:08:49.831 "write": true, 00:08:49.831 "unmap": true, 00:08:49.831 "flush": true, 00:08:49.831 "reset": true, 00:08:49.831 "nvme_admin": false, 00:08:49.831 "nvme_io": false, 00:08:49.831 "nvme_io_md": false, 00:08:49.831 "write_zeroes": true, 00:08:49.831 "zcopy": false, 00:08:49.831 "get_zone_info": false, 00:08:49.831 "zone_management": false, 00:08:49.831 "zone_append": false, 00:08:49.831 "compare": false, 00:08:49.831 "compare_and_write": false, 00:08:49.831 "abort": false, 00:08:49.831 "seek_hole": false, 00:08:49.831 "seek_data": false, 00:08:49.831 "copy": false, 00:08:49.831 "nvme_iov_md": false 00:08:49.831 }, 00:08:49.831 "memory_domains": [ 00:08:49.831 { 00:08:49.831 "dma_device_id": "system", 00:08:49.831 "dma_device_type": 1 00:08:49.831 }, 00:08:49.831 { 00:08:49.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.831 "dma_device_type": 2 00:08:49.831 }, 
00:08:49.831 { 00:08:49.831 "dma_device_id": "system", 00:08:49.831 "dma_device_type": 1 00:08:49.831 }, 00:08:49.831 { 00:08:49.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.831 "dma_device_type": 2 00:08:49.831 }, 00:08:49.831 { 00:08:49.831 "dma_device_id": "system", 00:08:49.831 "dma_device_type": 1 00:08:49.831 }, 00:08:49.831 { 00:08:49.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.831 "dma_device_type": 2 00:08:49.831 } 00:08:49.831 ], 00:08:49.831 "driver_specific": { 00:08:49.831 "raid": { 00:08:49.831 "uuid": "64e9b0b8-f639-4175-b174-47eb4eb3fa0c", 00:08:49.831 "strip_size_kb": 64, 00:08:49.831 "state": "online", 00:08:49.831 "raid_level": "raid0", 00:08:49.831 "superblock": false, 00:08:49.831 "num_base_bdevs": 3, 00:08:49.831 "num_base_bdevs_discovered": 3, 00:08:49.831 "num_base_bdevs_operational": 3, 00:08:49.831 "base_bdevs_list": [ 00:08:49.831 { 00:08:49.831 "name": "NewBaseBdev", 00:08:49.831 "uuid": "c235832b-0847-4a45-913d-08e4e5d3af84", 00:08:49.831 "is_configured": true, 00:08:49.831 "data_offset": 0, 00:08:49.831 "data_size": 65536 00:08:49.831 }, 00:08:49.832 { 00:08:49.832 "name": "BaseBdev2", 00:08:49.832 "uuid": "6d9cd72b-88e0-4f32-b31e-7f4786ff7870", 00:08:49.832 "is_configured": true, 00:08:49.832 "data_offset": 0, 00:08:49.832 "data_size": 65536 00:08:49.832 }, 00:08:49.832 { 00:08:49.832 "name": "BaseBdev3", 00:08:49.832 "uuid": "c62c0537-b048-4aa5-9d4d-a72459bd9082", 00:08:49.832 "is_configured": true, 00:08:49.832 "data_offset": 0, 00:08:49.832 "data_size": 65536 00:08:49.832 } 00:08:49.832 ] 00:08:49.832 } 00:08:49.832 } 00:08:49.832 }' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:49.832 BaseBdev2 00:08:49.832 BaseBdev3' 00:08:49.832 03:57:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.832 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.092 [2024-11-18 03:57:46.471926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.092 [2024-11-18 03:57:46.472059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.092 [2024-11-18 03:57:46.472193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.092 [2024-11-18 03:57:46.472273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.092 [2024-11-18 03:57:46.472317] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:50.092 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.092 03:57:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63802 00:08:50.092 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63802 ']' 00:08:50.092 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63802 00:08:50.092 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:50.092 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.093 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63802 00:08:50.093 killing process with pid 63802 00:08:50.093 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.093 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.093 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63802' 00:08:50.093 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63802 00:08:50.093 [2024-11-18 03:57:46.520085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.093 03:57:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63802 00:08:50.352 [2024-11-18 03:57:46.843486] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:51.760 00:08:51.760 real 0m10.738s 00:08:51.760 user 0m16.942s 00:08:51.760 sys 0m1.868s 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.760 ************************************ 00:08:51.760 END TEST raid_state_function_test 00:08:51.760 ************************************ 00:08:51.760 03:57:48 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:51.760 03:57:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.760 03:57:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.760 03:57:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.760 ************************************ 00:08:51.760 START TEST raid_state_function_test_sb 00:08:51.760 ************************************ 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64423 00:08:51.760 03:57:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64423' 00:08:51.760 Process raid pid: 64423 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64423 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64423 ']' 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.760 03:57:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.760 [2024-11-18 03:57:48.213885] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:51.760 [2024-11-18 03:57:48.213997] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.760 [2024-11-18 03:57:48.387593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.021 [2024-11-18 03:57:48.524359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.281 [2024-11-18 03:57:48.758371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.281 [2024-11-18 03:57:48.758416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.541 [2024-11-18 03:57:49.044299] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.541 [2024-11-18 03:57:49.044371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.541 [2024-11-18 03:57:49.044382] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.541 [2024-11-18 03:57:49.044392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:52.541 [2024-11-18 03:57:49.044398] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:52.541 [2024-11-18 03:57:49.044408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.541 "name": "Existed_Raid", 00:08:52.541 "uuid": "7766e261-7a42-4fe9-9daa-01d247dc4183", 00:08:52.541 "strip_size_kb": 64, 00:08:52.541 "state": "configuring", 00:08:52.541 "raid_level": "raid0", 00:08:52.541 "superblock": true, 00:08:52.541 "num_base_bdevs": 3, 00:08:52.541 "num_base_bdevs_discovered": 0, 00:08:52.541 "num_base_bdevs_operational": 3, 00:08:52.541 "base_bdevs_list": [ 00:08:52.541 { 00:08:52.541 "name": "BaseBdev1", 00:08:52.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.541 "is_configured": false, 00:08:52.541 "data_offset": 0, 00:08:52.541 "data_size": 0 00:08:52.541 }, 00:08:52.541 { 00:08:52.541 "name": "BaseBdev2", 00:08:52.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.541 "is_configured": false, 00:08:52.541 "data_offset": 0, 00:08:52.541 "data_size": 0 00:08:52.541 }, 00:08:52.541 { 00:08:52.541 "name": "BaseBdev3", 00:08:52.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.541 "is_configured": false, 00:08:52.541 "data_offset": 0, 00:08:52.541 "data_size": 0 00:08:52.541 } 00:08:52.541 ] 00:08:52.541 }' 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.541 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 [2024-11-18 03:57:49.503599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.110 [2024-11-18 03:57:49.503745] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 [2024-11-18 03:57:49.515560] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.110 [2024-11-18 03:57:49.515692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.110 [2024-11-18 03:57:49.515720] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.110 [2024-11-18 03:57:49.515743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.110 [2024-11-18 03:57:49.515760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.110 [2024-11-18 03:57:49.515781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 [2024-11-18 03:57:49.569356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.110 BaseBdev1 
00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.110 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.110 [ 00:08:53.110 { 00:08:53.110 "name": "BaseBdev1", 00:08:53.110 "aliases": [ 00:08:53.110 "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2" 00:08:53.110 ], 00:08:53.110 "product_name": "Malloc disk", 00:08:53.110 "block_size": 512, 00:08:53.110 "num_blocks": 65536, 00:08:53.110 "uuid": "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2", 00:08:53.110 "assigned_rate_limits": { 00:08:53.110 
"rw_ios_per_sec": 0, 00:08:53.110 "rw_mbytes_per_sec": 0, 00:08:53.110 "r_mbytes_per_sec": 0, 00:08:53.110 "w_mbytes_per_sec": 0 00:08:53.110 }, 00:08:53.110 "claimed": true, 00:08:53.110 "claim_type": "exclusive_write", 00:08:53.110 "zoned": false, 00:08:53.110 "supported_io_types": { 00:08:53.110 "read": true, 00:08:53.110 "write": true, 00:08:53.110 "unmap": true, 00:08:53.110 "flush": true, 00:08:53.110 "reset": true, 00:08:53.110 "nvme_admin": false, 00:08:53.110 "nvme_io": false, 00:08:53.110 "nvme_io_md": false, 00:08:53.110 "write_zeroes": true, 00:08:53.111 "zcopy": true, 00:08:53.111 "get_zone_info": false, 00:08:53.111 "zone_management": false, 00:08:53.111 "zone_append": false, 00:08:53.111 "compare": false, 00:08:53.111 "compare_and_write": false, 00:08:53.111 "abort": true, 00:08:53.111 "seek_hole": false, 00:08:53.111 "seek_data": false, 00:08:53.111 "copy": true, 00:08:53.111 "nvme_iov_md": false 00:08:53.111 }, 00:08:53.111 "memory_domains": [ 00:08:53.111 { 00:08:53.111 "dma_device_id": "system", 00:08:53.111 "dma_device_type": 1 00:08:53.111 }, 00:08:53.111 { 00:08:53.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.111 "dma_device_type": 2 00:08:53.111 } 00:08:53.111 ], 00:08:53.111 "driver_specific": {} 00:08:53.111 } 00:08:53.111 ] 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.111 "name": "Existed_Raid", 00:08:53.111 "uuid": "53fe30a7-d912-49e5-a871-9ff3d845f702", 00:08:53.111 "strip_size_kb": 64, 00:08:53.111 "state": "configuring", 00:08:53.111 "raid_level": "raid0", 00:08:53.111 "superblock": true, 00:08:53.111 "num_base_bdevs": 3, 00:08:53.111 "num_base_bdevs_discovered": 1, 00:08:53.111 "num_base_bdevs_operational": 3, 00:08:53.111 "base_bdevs_list": [ 00:08:53.111 { 00:08:53.111 "name": "BaseBdev1", 00:08:53.111 "uuid": "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2", 00:08:53.111 "is_configured": true, 00:08:53.111 "data_offset": 2048, 00:08:53.111 "data_size": 63488 
00:08:53.111 }, 00:08:53.111 { 00:08:53.111 "name": "BaseBdev2", 00:08:53.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.111 "is_configured": false, 00:08:53.111 "data_offset": 0, 00:08:53.111 "data_size": 0 00:08:53.111 }, 00:08:53.111 { 00:08:53.111 "name": "BaseBdev3", 00:08:53.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.111 "is_configured": false, 00:08:53.111 "data_offset": 0, 00:08:53.111 "data_size": 0 00:08:53.111 } 00:08:53.111 ] 00:08:53.111 }' 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.111 03:57:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.679 [2024-11-18 03:57:50.104540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.679 [2024-11-18 03:57:50.104620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.679 [2024-11-18 03:57:50.112561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.679 [2024-11-18 
03:57:50.114616] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.679 [2024-11-18 03:57:50.114738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.679 [2024-11-18 03:57:50.114753] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.679 [2024-11-18 03:57:50.114762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.679 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.680 "name": "Existed_Raid", 00:08:53.680 "uuid": "adbf1c0a-043f-4570-bd75-1b49eccaac48", 00:08:53.680 "strip_size_kb": 64, 00:08:53.680 "state": "configuring", 00:08:53.680 "raid_level": "raid0", 00:08:53.680 "superblock": true, 00:08:53.680 "num_base_bdevs": 3, 00:08:53.680 "num_base_bdevs_discovered": 1, 00:08:53.680 "num_base_bdevs_operational": 3, 00:08:53.680 "base_bdevs_list": [ 00:08:53.680 { 00:08:53.680 "name": "BaseBdev1", 00:08:53.680 "uuid": "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2", 00:08:53.680 "is_configured": true, 00:08:53.680 "data_offset": 2048, 00:08:53.680 "data_size": 63488 00:08:53.680 }, 00:08:53.680 { 00:08:53.680 "name": "BaseBdev2", 00:08:53.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.680 "is_configured": false, 00:08:53.680 "data_offset": 0, 00:08:53.680 "data_size": 0 00:08:53.680 }, 00:08:53.680 { 00:08:53.680 "name": "BaseBdev3", 00:08:53.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.680 "is_configured": false, 00:08:53.680 "data_offset": 0, 00:08:53.680 "data_size": 0 00:08:53.680 } 00:08:53.680 ] 00:08:53.680 }' 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.680 03:57:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.940 [2024-11-18 03:57:50.560409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.940 BaseBdev2 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.940 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.200 [ 00:08:54.200 { 00:08:54.200 "name": "BaseBdev2", 00:08:54.200 "aliases": [ 00:08:54.200 "b855df63-f4c5-4e19-948b-e709b33fcadb" 00:08:54.200 ], 00:08:54.200 "product_name": "Malloc disk", 00:08:54.200 "block_size": 512, 00:08:54.200 "num_blocks": 65536, 00:08:54.200 "uuid": "b855df63-f4c5-4e19-948b-e709b33fcadb", 00:08:54.200 "assigned_rate_limits": { 00:08:54.200 "rw_ios_per_sec": 0, 00:08:54.200 "rw_mbytes_per_sec": 0, 00:08:54.200 "r_mbytes_per_sec": 0, 00:08:54.200 "w_mbytes_per_sec": 0 00:08:54.200 }, 00:08:54.200 "claimed": true, 00:08:54.200 "claim_type": "exclusive_write", 00:08:54.200 "zoned": false, 00:08:54.200 "supported_io_types": { 00:08:54.200 "read": true, 00:08:54.200 "write": true, 00:08:54.200 "unmap": true, 00:08:54.200 "flush": true, 00:08:54.200 "reset": true, 00:08:54.200 "nvme_admin": false, 00:08:54.200 "nvme_io": false, 00:08:54.200 "nvme_io_md": false, 00:08:54.200 "write_zeroes": true, 00:08:54.200 "zcopy": true, 00:08:54.200 "get_zone_info": false, 00:08:54.200 "zone_management": false, 00:08:54.200 "zone_append": false, 00:08:54.200 "compare": false, 00:08:54.200 "compare_and_write": false, 00:08:54.200 "abort": true, 00:08:54.200 "seek_hole": false, 00:08:54.200 "seek_data": false, 00:08:54.200 "copy": true, 00:08:54.200 "nvme_iov_md": false 00:08:54.200 }, 00:08:54.200 "memory_domains": [ 00:08:54.200 { 00:08:54.200 "dma_device_id": "system", 00:08:54.200 "dma_device_type": 1 00:08:54.200 }, 00:08:54.200 { 00:08:54.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.200 "dma_device_type": 2 00:08:54.200 } 00:08:54.200 ], 00:08:54.200 "driver_specific": {} 00:08:54.200 } 00:08:54.200 ] 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.200 "name": "Existed_Raid", 00:08:54.200 "uuid": "adbf1c0a-043f-4570-bd75-1b49eccaac48", 00:08:54.200 "strip_size_kb": 64, 00:08:54.200 "state": "configuring", 00:08:54.200 "raid_level": "raid0", 00:08:54.200 "superblock": true, 00:08:54.200 "num_base_bdevs": 3, 00:08:54.200 "num_base_bdevs_discovered": 2, 00:08:54.200 "num_base_bdevs_operational": 3, 00:08:54.200 "base_bdevs_list": [ 00:08:54.200 { 00:08:54.200 "name": "BaseBdev1", 00:08:54.200 "uuid": "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2", 00:08:54.200 "is_configured": true, 00:08:54.200 "data_offset": 2048, 00:08:54.200 "data_size": 63488 00:08:54.200 }, 00:08:54.200 { 00:08:54.200 "name": "BaseBdev2", 00:08:54.200 "uuid": "b855df63-f4c5-4e19-948b-e709b33fcadb", 00:08:54.200 "is_configured": true, 00:08:54.200 "data_offset": 2048, 00:08:54.200 "data_size": 63488 00:08:54.200 }, 00:08:54.200 { 00:08:54.200 "name": "BaseBdev3", 00:08:54.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.200 "is_configured": false, 00:08:54.200 "data_offset": 0, 00:08:54.200 "data_size": 0 00:08:54.200 } 00:08:54.200 ] 00:08:54.200 }' 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.200 03:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.460 BaseBdev3 00:08:54.460 [2024-11-18 03:57:51.088157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.460 [2024-11-18 
03:57:51.088484] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:54.460 [2024-11-18 03:57:51.088511] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.460 [2024-11-18 03:57:51.088824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:54.460 [2024-11-18 03:57:51.089002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:54.460 [2024-11-18 03:57:51.089011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:54.460 [2024-11-18 03:57:51.089179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.460 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 [ 00:08:54.720 { 00:08:54.720 "name": "BaseBdev3", 00:08:54.720 "aliases": [ 00:08:54.720 "3bef7dfb-0310-4c57-a8eb-c2ef4fb9ec18" 00:08:54.720 ], 00:08:54.720 "product_name": "Malloc disk", 00:08:54.720 "block_size": 512, 00:08:54.720 "num_blocks": 65536, 00:08:54.720 "uuid": "3bef7dfb-0310-4c57-a8eb-c2ef4fb9ec18", 00:08:54.720 "assigned_rate_limits": { 00:08:54.720 "rw_ios_per_sec": 0, 00:08:54.720 "rw_mbytes_per_sec": 0, 00:08:54.720 "r_mbytes_per_sec": 0, 00:08:54.720 "w_mbytes_per_sec": 0 00:08:54.720 }, 00:08:54.720 "claimed": true, 00:08:54.720 "claim_type": "exclusive_write", 00:08:54.720 "zoned": false, 00:08:54.720 "supported_io_types": { 00:08:54.720 "read": true, 00:08:54.720 "write": true, 00:08:54.720 "unmap": true, 00:08:54.720 "flush": true, 00:08:54.720 "reset": true, 00:08:54.720 "nvme_admin": false, 00:08:54.720 "nvme_io": false, 00:08:54.720 "nvme_io_md": false, 00:08:54.720 "write_zeroes": true, 00:08:54.720 "zcopy": true, 00:08:54.720 "get_zone_info": false, 00:08:54.720 "zone_management": false, 00:08:54.720 "zone_append": false, 00:08:54.720 "compare": false, 00:08:54.720 "compare_and_write": false, 00:08:54.720 "abort": true, 00:08:54.720 "seek_hole": false, 00:08:54.720 "seek_data": false, 00:08:54.720 "copy": true, 00:08:54.720 "nvme_iov_md": false 00:08:54.720 }, 00:08:54.720 "memory_domains": [ 00:08:54.720 { 00:08:54.720 "dma_device_id": "system", 00:08:54.720 "dma_device_type": 1 00:08:54.720 }, 00:08:54.720 { 00:08:54.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.720 "dma_device_type": 2 00:08:54.720 } 00:08:54.720 ], 00:08:54.720 "driver_specific": {} 
00:08:54.720 } 00:08:54.720 ] 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.720 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.720 "name": "Existed_Raid", 00:08:54.720 "uuid": "adbf1c0a-043f-4570-bd75-1b49eccaac48", 00:08:54.720 "strip_size_kb": 64, 00:08:54.720 "state": "online", 00:08:54.720 "raid_level": "raid0", 00:08:54.720 "superblock": true, 00:08:54.720 "num_base_bdevs": 3, 00:08:54.720 "num_base_bdevs_discovered": 3, 00:08:54.720 "num_base_bdevs_operational": 3, 00:08:54.721 "base_bdevs_list": [ 00:08:54.721 { 00:08:54.721 "name": "BaseBdev1", 00:08:54.721 "uuid": "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2", 00:08:54.721 "is_configured": true, 00:08:54.721 "data_offset": 2048, 00:08:54.721 "data_size": 63488 00:08:54.721 }, 00:08:54.721 { 00:08:54.721 "name": "BaseBdev2", 00:08:54.721 "uuid": "b855df63-f4c5-4e19-948b-e709b33fcadb", 00:08:54.721 "is_configured": true, 00:08:54.721 "data_offset": 2048, 00:08:54.721 "data_size": 63488 00:08:54.721 }, 00:08:54.721 { 00:08:54.721 "name": "BaseBdev3", 00:08:54.721 "uuid": "3bef7dfb-0310-4c57-a8eb-c2ef4fb9ec18", 00:08:54.721 "is_configured": true, 00:08:54.721 "data_offset": 2048, 00:08:54.721 "data_size": 63488 00:08:54.721 } 00:08:54.721 ] 00:08:54.721 }' 00:08:54.721 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.721 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.981 [2024-11-18 03:57:51.583909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.981 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.241 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.241 "name": "Existed_Raid", 00:08:55.241 "aliases": [ 00:08:55.241 "adbf1c0a-043f-4570-bd75-1b49eccaac48" 00:08:55.241 ], 00:08:55.241 "product_name": "Raid Volume", 00:08:55.241 "block_size": 512, 00:08:55.241 "num_blocks": 190464, 00:08:55.241 "uuid": "adbf1c0a-043f-4570-bd75-1b49eccaac48", 00:08:55.241 "assigned_rate_limits": { 00:08:55.241 "rw_ios_per_sec": 0, 00:08:55.241 "rw_mbytes_per_sec": 0, 00:08:55.241 "r_mbytes_per_sec": 0, 00:08:55.241 "w_mbytes_per_sec": 0 00:08:55.241 }, 00:08:55.241 "claimed": false, 00:08:55.241 "zoned": false, 00:08:55.241 "supported_io_types": { 00:08:55.241 "read": true, 00:08:55.241 "write": true, 00:08:55.241 "unmap": true, 00:08:55.241 "flush": true, 00:08:55.241 "reset": true, 00:08:55.241 "nvme_admin": false, 00:08:55.241 "nvme_io": false, 00:08:55.241 "nvme_io_md": false, 00:08:55.241 
"write_zeroes": true, 00:08:55.241 "zcopy": false, 00:08:55.241 "get_zone_info": false, 00:08:55.241 "zone_management": false, 00:08:55.241 "zone_append": false, 00:08:55.241 "compare": false, 00:08:55.241 "compare_and_write": false, 00:08:55.241 "abort": false, 00:08:55.241 "seek_hole": false, 00:08:55.241 "seek_data": false, 00:08:55.241 "copy": false, 00:08:55.241 "nvme_iov_md": false 00:08:55.241 }, 00:08:55.241 "memory_domains": [ 00:08:55.241 { 00:08:55.241 "dma_device_id": "system", 00:08:55.241 "dma_device_type": 1 00:08:55.241 }, 00:08:55.241 { 00:08:55.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.241 "dma_device_type": 2 00:08:55.241 }, 00:08:55.241 { 00:08:55.241 "dma_device_id": "system", 00:08:55.241 "dma_device_type": 1 00:08:55.241 }, 00:08:55.241 { 00:08:55.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.241 "dma_device_type": 2 00:08:55.241 }, 00:08:55.241 { 00:08:55.241 "dma_device_id": "system", 00:08:55.241 "dma_device_type": 1 00:08:55.241 }, 00:08:55.241 { 00:08:55.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.241 "dma_device_type": 2 00:08:55.241 } 00:08:55.241 ], 00:08:55.241 "driver_specific": { 00:08:55.241 "raid": { 00:08:55.241 "uuid": "adbf1c0a-043f-4570-bd75-1b49eccaac48", 00:08:55.241 "strip_size_kb": 64, 00:08:55.241 "state": "online", 00:08:55.241 "raid_level": "raid0", 00:08:55.241 "superblock": true, 00:08:55.241 "num_base_bdevs": 3, 00:08:55.241 "num_base_bdevs_discovered": 3, 00:08:55.241 "num_base_bdevs_operational": 3, 00:08:55.241 "base_bdevs_list": [ 00:08:55.241 { 00:08:55.241 "name": "BaseBdev1", 00:08:55.241 "uuid": "5a87abb7-8c91-4e34-ad2b-dde2ffbd51f2", 00:08:55.241 "is_configured": true, 00:08:55.241 "data_offset": 2048, 00:08:55.241 "data_size": 63488 00:08:55.241 }, 00:08:55.241 { 00:08:55.241 "name": "BaseBdev2", 00:08:55.241 "uuid": "b855df63-f4c5-4e19-948b-e709b33fcadb", 00:08:55.241 "is_configured": true, 00:08:55.241 "data_offset": 2048, 00:08:55.241 "data_size": 63488 00:08:55.241 }, 
00:08:55.241 { 00:08:55.241 "name": "BaseBdev3", 00:08:55.241 "uuid": "3bef7dfb-0310-4c57-a8eb-c2ef4fb9ec18", 00:08:55.241 "is_configured": true, 00:08:55.241 "data_offset": 2048, 00:08:55.241 "data_size": 63488 00:08:55.241 } 00:08:55.241 ] 00:08:55.241 } 00:08:55.241 } 00:08:55.241 }' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:55.242 BaseBdev2 00:08:55.242 BaseBdev3' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.242 
03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.242 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.242 [2024-11-18 03:57:51.875053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.242 [2024-11-18 03:57:51.875097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.242 [2024-11-18 03:57:51.875155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.502 03:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.502 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.502 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.502 "name": "Existed_Raid", 00:08:55.502 "uuid": "adbf1c0a-043f-4570-bd75-1b49eccaac48", 00:08:55.502 "strip_size_kb": 64, 00:08:55.502 "state": "offline", 00:08:55.502 "raid_level": "raid0", 00:08:55.502 "superblock": true, 00:08:55.502 "num_base_bdevs": 3, 00:08:55.502 "num_base_bdevs_discovered": 2, 00:08:55.502 "num_base_bdevs_operational": 2, 00:08:55.502 "base_bdevs_list": [ 00:08:55.502 { 00:08:55.502 "name": null, 00:08:55.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.502 "is_configured": false, 00:08:55.502 "data_offset": 0, 00:08:55.502 "data_size": 63488 00:08:55.502 }, 00:08:55.502 { 00:08:55.502 "name": "BaseBdev2", 00:08:55.502 "uuid": "b855df63-f4c5-4e19-948b-e709b33fcadb", 00:08:55.502 "is_configured": true, 00:08:55.502 "data_offset": 2048, 00:08:55.502 "data_size": 63488 00:08:55.502 }, 00:08:55.502 { 00:08:55.502 "name": "BaseBdev3", 00:08:55.502 "uuid": "3bef7dfb-0310-4c57-a8eb-c2ef4fb9ec18", 
00:08:55.502 "is_configured": true, 00:08:55.502 "data_offset": 2048, 00:08:55.502 "data_size": 63488 00:08:55.502 } 00:08:55.502 ] 00:08:55.502 }' 00:08:55.502 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.502 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 [2024-11-18 03:57:52.457385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.073 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.073 [2024-11-18 03:57:52.619804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:56.073 [2024-11-18 03:57:52.619879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.333 BaseBdev2 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.333 03:57:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.333 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 [ 00:08:56.334 { 00:08:56.334 "name": "BaseBdev2", 00:08:56.334 "aliases": [ 00:08:56.334 "ddbe28e5-e5d1-4600-b782-4c8a123872ba" 00:08:56.334 ], 00:08:56.334 "product_name": "Malloc disk", 00:08:56.334 "block_size": 512, 00:08:56.334 "num_blocks": 65536, 00:08:56.334 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:56.334 "assigned_rate_limits": { 00:08:56.334 "rw_ios_per_sec": 0, 00:08:56.334 "rw_mbytes_per_sec": 0, 00:08:56.334 "r_mbytes_per_sec": 0, 00:08:56.334 "w_mbytes_per_sec": 0 00:08:56.334 }, 00:08:56.334 "claimed": false, 00:08:56.334 "zoned": false, 00:08:56.334 "supported_io_types": { 00:08:56.334 "read": true, 00:08:56.334 "write": true, 00:08:56.334 "unmap": true, 00:08:56.334 "flush": true, 00:08:56.334 "reset": true, 00:08:56.334 "nvme_admin": false, 00:08:56.334 "nvme_io": false, 00:08:56.334 "nvme_io_md": false, 00:08:56.334 "write_zeroes": true, 00:08:56.334 "zcopy": true, 00:08:56.334 "get_zone_info": false, 00:08:56.334 
"zone_management": false, 00:08:56.334 "zone_append": false, 00:08:56.334 "compare": false, 00:08:56.334 "compare_and_write": false, 00:08:56.334 "abort": true, 00:08:56.334 "seek_hole": false, 00:08:56.334 "seek_data": false, 00:08:56.334 "copy": true, 00:08:56.334 "nvme_iov_md": false 00:08:56.334 }, 00:08:56.334 "memory_domains": [ 00:08:56.334 { 00:08:56.334 "dma_device_id": "system", 00:08:56.334 "dma_device_type": 1 00:08:56.334 }, 00:08:56.334 { 00:08:56.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.334 "dma_device_type": 2 00:08:56.334 } 00:08:56.334 ], 00:08:56.334 "driver_specific": {} 00:08:56.334 } 00:08:56.334 ] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 BaseBdev3 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 [ 00:08:56.334 { 00:08:56.334 "name": "BaseBdev3", 00:08:56.334 "aliases": [ 00:08:56.334 "4bde12b5-c966-4989-8749-2a033b177459" 00:08:56.334 ], 00:08:56.334 "product_name": "Malloc disk", 00:08:56.334 "block_size": 512, 00:08:56.334 "num_blocks": 65536, 00:08:56.334 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:56.334 "assigned_rate_limits": { 00:08:56.334 "rw_ios_per_sec": 0, 00:08:56.334 "rw_mbytes_per_sec": 0, 00:08:56.334 "r_mbytes_per_sec": 0, 00:08:56.334 "w_mbytes_per_sec": 0 00:08:56.334 }, 00:08:56.334 "claimed": false, 00:08:56.334 "zoned": false, 00:08:56.334 "supported_io_types": { 00:08:56.334 "read": true, 00:08:56.334 "write": true, 00:08:56.334 "unmap": true, 00:08:56.334 "flush": true, 00:08:56.334 "reset": true, 00:08:56.334 "nvme_admin": false, 00:08:56.334 "nvme_io": false, 00:08:56.334 "nvme_io_md": false, 00:08:56.334 "write_zeroes": true, 00:08:56.334 
"zcopy": true, 00:08:56.334 "get_zone_info": false, 00:08:56.334 "zone_management": false, 00:08:56.334 "zone_append": false, 00:08:56.334 "compare": false, 00:08:56.334 "compare_and_write": false, 00:08:56.334 "abort": true, 00:08:56.334 "seek_hole": false, 00:08:56.334 "seek_data": false, 00:08:56.334 "copy": true, 00:08:56.334 "nvme_iov_md": false 00:08:56.334 }, 00:08:56.334 "memory_domains": [ 00:08:56.334 { 00:08:56.334 "dma_device_id": "system", 00:08:56.334 "dma_device_type": 1 00:08:56.334 }, 00:08:56.334 { 00:08:56.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.334 "dma_device_type": 2 00:08:56.334 } 00:08:56.334 ], 00:08:56.334 "driver_specific": {} 00:08:56.334 } 00:08:56.334 ] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 [2024-11-18 03:57:52.945634] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.334 [2024-11-18 03:57:52.945772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.334 [2024-11-18 03:57:52.945815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:56.334 [2024-11-18 03:57:52.947965] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.334 03:57:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.594 03:57:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.594 "name": "Existed_Raid", 00:08:56.594 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:56.594 "strip_size_kb": 64, 00:08:56.594 "state": "configuring", 00:08:56.594 "raid_level": "raid0", 00:08:56.594 "superblock": true, 00:08:56.594 "num_base_bdevs": 3, 00:08:56.594 "num_base_bdevs_discovered": 2, 00:08:56.594 "num_base_bdevs_operational": 3, 00:08:56.594 "base_bdevs_list": [ 00:08:56.594 { 00:08:56.594 "name": "BaseBdev1", 00:08:56.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.594 "is_configured": false, 00:08:56.594 "data_offset": 0, 00:08:56.595 "data_size": 0 00:08:56.595 }, 00:08:56.595 { 00:08:56.595 "name": "BaseBdev2", 00:08:56.595 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:56.595 "is_configured": true, 00:08:56.595 "data_offset": 2048, 00:08:56.595 "data_size": 63488 00:08:56.595 }, 00:08:56.595 { 00:08:56.595 "name": "BaseBdev3", 00:08:56.595 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:56.595 "is_configured": true, 00:08:56.595 "data_offset": 2048, 00:08:56.595 "data_size": 63488 00:08:56.595 } 00:08:56.595 ] 00:08:56.595 }' 00:08:56.595 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.595 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 [2024-11-18 03:57:53.396931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.866 03:57:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.866 "name": "Existed_Raid", 00:08:56.866 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:56.866 "strip_size_kb": 64, 
00:08:56.866 "state": "configuring", 00:08:56.866 "raid_level": "raid0", 00:08:56.866 "superblock": true, 00:08:56.866 "num_base_bdevs": 3, 00:08:56.866 "num_base_bdevs_discovered": 1, 00:08:56.866 "num_base_bdevs_operational": 3, 00:08:56.866 "base_bdevs_list": [ 00:08:56.866 { 00:08:56.866 "name": "BaseBdev1", 00:08:56.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.866 "is_configured": false, 00:08:56.866 "data_offset": 0, 00:08:56.866 "data_size": 0 00:08:56.866 }, 00:08:56.866 { 00:08:56.866 "name": null, 00:08:56.866 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:56.866 "is_configured": false, 00:08:56.866 "data_offset": 0, 00:08:56.866 "data_size": 63488 00:08:56.866 }, 00:08:56.866 { 00:08:56.866 "name": "BaseBdev3", 00:08:56.866 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:56.866 "is_configured": true, 00:08:56.866 "data_offset": 2048, 00:08:56.866 "data_size": 63488 00:08:56.866 } 00:08:56.866 ] 00:08:56.866 }' 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.866 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 [2024-11-18 03:57:53.893228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.450 BaseBdev1 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.450 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.450 
[ 00:08:57.450 { 00:08:57.450 "name": "BaseBdev1", 00:08:57.450 "aliases": [ 00:08:57.450 "99a81927-5eb0-4a1c-a50e-8e0390a50683" 00:08:57.450 ], 00:08:57.450 "product_name": "Malloc disk", 00:08:57.450 "block_size": 512, 00:08:57.450 "num_blocks": 65536, 00:08:57.450 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:57.450 "assigned_rate_limits": { 00:08:57.450 "rw_ios_per_sec": 0, 00:08:57.450 "rw_mbytes_per_sec": 0, 00:08:57.450 "r_mbytes_per_sec": 0, 00:08:57.450 "w_mbytes_per_sec": 0 00:08:57.450 }, 00:08:57.450 "claimed": true, 00:08:57.450 "claim_type": "exclusive_write", 00:08:57.450 "zoned": false, 00:08:57.450 "supported_io_types": { 00:08:57.450 "read": true, 00:08:57.450 "write": true, 00:08:57.450 "unmap": true, 00:08:57.451 "flush": true, 00:08:57.451 "reset": true, 00:08:57.451 "nvme_admin": false, 00:08:57.451 "nvme_io": false, 00:08:57.451 "nvme_io_md": false, 00:08:57.451 "write_zeroes": true, 00:08:57.451 "zcopy": true, 00:08:57.451 "get_zone_info": false, 00:08:57.451 "zone_management": false, 00:08:57.451 "zone_append": false, 00:08:57.451 "compare": false, 00:08:57.451 "compare_and_write": false, 00:08:57.451 "abort": true, 00:08:57.451 "seek_hole": false, 00:08:57.451 "seek_data": false, 00:08:57.451 "copy": true, 00:08:57.451 "nvme_iov_md": false 00:08:57.451 }, 00:08:57.451 "memory_domains": [ 00:08:57.451 { 00:08:57.451 "dma_device_id": "system", 00:08:57.451 "dma_device_type": 1 00:08:57.451 }, 00:08:57.451 { 00:08:57.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.451 "dma_device_type": 2 00:08:57.451 } 00:08:57.451 ], 00:08:57.451 "driver_specific": {} 00:08:57.451 } 00:08:57.451 ] 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.451 "name": "Existed_Raid", 00:08:57.451 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:57.451 "strip_size_kb": 64, 00:08:57.451 "state": "configuring", 00:08:57.451 "raid_level": "raid0", 00:08:57.451 "superblock": true, 
00:08:57.451 "num_base_bdevs": 3, 00:08:57.451 "num_base_bdevs_discovered": 2, 00:08:57.451 "num_base_bdevs_operational": 3, 00:08:57.451 "base_bdevs_list": [ 00:08:57.451 { 00:08:57.451 "name": "BaseBdev1", 00:08:57.451 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:57.451 "is_configured": true, 00:08:57.451 "data_offset": 2048, 00:08:57.451 "data_size": 63488 00:08:57.451 }, 00:08:57.451 { 00:08:57.451 "name": null, 00:08:57.451 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:57.451 "is_configured": false, 00:08:57.451 "data_offset": 0, 00:08:57.451 "data_size": 63488 00:08:57.451 }, 00:08:57.451 { 00:08:57.451 "name": "BaseBdev3", 00:08:57.451 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:57.451 "is_configured": true, 00:08:57.451 "data_offset": 2048, 00:08:57.451 "data_size": 63488 00:08:57.451 } 00:08:57.451 ] 00:08:57.451 }' 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.451 03:57:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.021 [2024-11-18 03:57:54.428418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.021 "name": "Existed_Raid", 00:08:58.021 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:58.021 "strip_size_kb": 64, 00:08:58.021 "state": "configuring", 00:08:58.021 "raid_level": "raid0", 00:08:58.021 "superblock": true, 00:08:58.021 "num_base_bdevs": 3, 00:08:58.021 "num_base_bdevs_discovered": 1, 00:08:58.021 "num_base_bdevs_operational": 3, 00:08:58.021 "base_bdevs_list": [ 00:08:58.021 { 00:08:58.021 "name": "BaseBdev1", 00:08:58.021 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:58.021 "is_configured": true, 00:08:58.021 "data_offset": 2048, 00:08:58.021 "data_size": 63488 00:08:58.021 }, 00:08:58.021 { 00:08:58.021 "name": null, 00:08:58.021 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:58.021 "is_configured": false, 00:08:58.021 "data_offset": 0, 00:08:58.021 "data_size": 63488 00:08:58.021 }, 00:08:58.021 { 00:08:58.021 "name": null, 00:08:58.021 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:58.021 "is_configured": false, 00:08:58.021 "data_offset": 0, 00:08:58.021 "data_size": 63488 00:08:58.021 } 00:08:58.021 ] 00:08:58.021 }' 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.021 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.282 [2024-11-18 03:57:54.883675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.282 03:57:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.542 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.542 "name": "Existed_Raid", 00:08:58.542 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:58.542 "strip_size_kb": 64, 00:08:58.542 "state": "configuring", 00:08:58.542 "raid_level": "raid0", 00:08:58.542 "superblock": true, 00:08:58.542 "num_base_bdevs": 3, 00:08:58.542 "num_base_bdevs_discovered": 2, 00:08:58.542 "num_base_bdevs_operational": 3, 00:08:58.542 "base_bdevs_list": [ 00:08:58.542 { 00:08:58.542 "name": "BaseBdev1", 00:08:58.542 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:58.542 "is_configured": true, 00:08:58.542 "data_offset": 2048, 00:08:58.542 "data_size": 63488 00:08:58.542 }, 00:08:58.542 { 00:08:58.542 "name": null, 00:08:58.542 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:58.542 "is_configured": false, 00:08:58.542 "data_offset": 0, 00:08:58.542 "data_size": 63488 00:08:58.542 }, 00:08:58.542 { 00:08:58.542 "name": "BaseBdev3", 00:08:58.542 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:58.542 "is_configured": true, 00:08:58.542 "data_offset": 2048, 00:08:58.542 "data_size": 63488 00:08:58.542 } 00:08:58.542 ] 00:08:58.542 }' 00:08:58.542 03:57:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.542 03:57:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.803 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.803 [2024-11-18 03:57:55.351025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.062 "name": "Existed_Raid", 00:08:59.062 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:59.062 "strip_size_kb": 64, 00:08:59.062 "state": "configuring", 00:08:59.062 "raid_level": "raid0", 00:08:59.062 "superblock": true, 00:08:59.062 "num_base_bdevs": 3, 00:08:59.062 "num_base_bdevs_discovered": 1, 00:08:59.062 "num_base_bdevs_operational": 3, 00:08:59.062 "base_bdevs_list": [ 00:08:59.062 { 00:08:59.062 "name": null, 00:08:59.062 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:59.062 "is_configured": false, 00:08:59.062 "data_offset": 0, 00:08:59.062 "data_size": 63488 00:08:59.062 }, 00:08:59.062 { 00:08:59.062 "name": null, 00:08:59.062 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:59.062 "is_configured": false, 00:08:59.062 "data_offset": 0, 00:08:59.062 
"data_size": 63488 00:08:59.062 }, 00:08:59.062 { 00:08:59.062 "name": "BaseBdev3", 00:08:59.062 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:59.062 "is_configured": true, 00:08:59.062 "data_offset": 2048, 00:08:59.062 "data_size": 63488 00:08:59.062 } 00:08:59.062 ] 00:08:59.062 }' 00:08:59.062 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.063 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.322 [2024-11-18 03:57:55.910421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.322 03:57:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.322 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.323 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.323 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.323 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.323 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.323 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.323 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.582 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.582 "name": "Existed_Raid", 00:08:59.582 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:08:59.582 "strip_size_kb": 64, 00:08:59.582 "state": "configuring", 00:08:59.582 "raid_level": "raid0", 00:08:59.582 "superblock": true, 00:08:59.582 "num_base_bdevs": 3, 00:08:59.582 
"num_base_bdevs_discovered": 2, 00:08:59.582 "num_base_bdevs_operational": 3, 00:08:59.582 "base_bdevs_list": [ 00:08:59.582 { 00:08:59.582 "name": null, 00:08:59.582 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:59.582 "is_configured": false, 00:08:59.582 "data_offset": 0, 00:08:59.582 "data_size": 63488 00:08:59.582 }, 00:08:59.582 { 00:08:59.582 "name": "BaseBdev2", 00:08:59.582 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:08:59.582 "is_configured": true, 00:08:59.582 "data_offset": 2048, 00:08:59.582 "data_size": 63488 00:08:59.582 }, 00:08:59.582 { 00:08:59.582 "name": "BaseBdev3", 00:08:59.582 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:08:59.582 "is_configured": true, 00:08:59.582 "data_offset": 2048, 00:08:59.582 "data_size": 63488 00:08:59.582 } 00:08:59.582 ] 00:08:59.582 }' 00:08:59.582 03:57:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.582 03:57:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.841 03:57:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 99a81927-5eb0-4a1c-a50e-8e0390a50683 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.841 [2024-11-18 03:57:56.431046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:59.841 [2024-11-18 03:57:56.431406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:59.841 [2024-11-18 03:57:56.431467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.841 [2024-11-18 03:57:56.431755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:59.841 [2024-11-18 03:57:56.431965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:59.841 NewBaseBdev 00:08:59.841 [2024-11-18 03:57:56.432025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:59.841 [2024-11-18 03:57:56.432226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:59.841 
03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.841 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.842 [ 00:08:59.842 { 00:08:59.842 "name": "NewBaseBdev", 00:08:59.842 "aliases": [ 00:08:59.842 "99a81927-5eb0-4a1c-a50e-8e0390a50683" 00:08:59.842 ], 00:08:59.842 "product_name": "Malloc disk", 00:08:59.842 "block_size": 512, 00:08:59.842 "num_blocks": 65536, 00:08:59.842 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:08:59.842 "assigned_rate_limits": { 00:08:59.842 "rw_ios_per_sec": 0, 00:08:59.842 "rw_mbytes_per_sec": 0, 00:08:59.842 "r_mbytes_per_sec": 0, 00:08:59.842 "w_mbytes_per_sec": 0 00:08:59.842 }, 00:08:59.842 "claimed": true, 00:08:59.842 "claim_type": "exclusive_write", 00:08:59.842 "zoned": false, 00:08:59.842 "supported_io_types": { 00:08:59.842 "read": true, 00:08:59.842 "write": true, 00:08:59.842 
"unmap": true, 00:08:59.842 "flush": true, 00:08:59.842 "reset": true, 00:08:59.842 "nvme_admin": false, 00:08:59.842 "nvme_io": false, 00:08:59.842 "nvme_io_md": false, 00:08:59.842 "write_zeroes": true, 00:08:59.842 "zcopy": true, 00:08:59.842 "get_zone_info": false, 00:08:59.842 "zone_management": false, 00:08:59.842 "zone_append": false, 00:08:59.842 "compare": false, 00:08:59.842 "compare_and_write": false, 00:08:59.842 "abort": true, 00:08:59.842 "seek_hole": false, 00:08:59.842 "seek_data": false, 00:08:59.842 "copy": true, 00:08:59.842 "nvme_iov_md": false 00:08:59.842 }, 00:08:59.842 "memory_domains": [ 00:08:59.842 { 00:08:59.842 "dma_device_id": "system", 00:08:59.842 "dma_device_type": 1 00:08:59.842 }, 00:08:59.842 { 00:08:59.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.842 "dma_device_type": 2 00:08:59.842 } 00:08:59.842 ], 00:08:59.842 "driver_specific": {} 00:08:59.842 } 00:08:59.842 ] 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.842 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.101 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.101 "name": "Existed_Raid", 00:09:00.101 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:09:00.101 "strip_size_kb": 64, 00:09:00.101 "state": "online", 00:09:00.101 "raid_level": "raid0", 00:09:00.101 "superblock": true, 00:09:00.101 "num_base_bdevs": 3, 00:09:00.101 "num_base_bdevs_discovered": 3, 00:09:00.101 "num_base_bdevs_operational": 3, 00:09:00.101 "base_bdevs_list": [ 00:09:00.101 { 00:09:00.101 "name": "NewBaseBdev", 00:09:00.101 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:09:00.101 "is_configured": true, 00:09:00.101 "data_offset": 2048, 00:09:00.101 "data_size": 63488 00:09:00.101 }, 00:09:00.101 { 00:09:00.101 "name": "BaseBdev2", 00:09:00.101 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:09:00.101 "is_configured": true, 00:09:00.101 "data_offset": 2048, 00:09:00.101 "data_size": 63488 00:09:00.101 }, 00:09:00.101 { 00:09:00.101 "name": "BaseBdev3", 00:09:00.101 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:09:00.101 
"is_configured": true, 00:09:00.101 "data_offset": 2048, 00:09:00.101 "data_size": 63488 00:09:00.101 } 00:09:00.101 ] 00:09:00.101 }' 00:09:00.101 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.101 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 [2024-11-18 03:57:56.910596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.361 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.361 "name": "Existed_Raid", 00:09:00.361 "aliases": [ 00:09:00.361 "8a2056e7-6287-4b33-9c88-80cb02adaa1b" 00:09:00.361 ], 00:09:00.361 "product_name": "Raid 
Volume", 00:09:00.361 "block_size": 512, 00:09:00.361 "num_blocks": 190464, 00:09:00.361 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:09:00.361 "assigned_rate_limits": { 00:09:00.361 "rw_ios_per_sec": 0, 00:09:00.361 "rw_mbytes_per_sec": 0, 00:09:00.361 "r_mbytes_per_sec": 0, 00:09:00.361 "w_mbytes_per_sec": 0 00:09:00.361 }, 00:09:00.361 "claimed": false, 00:09:00.361 "zoned": false, 00:09:00.361 "supported_io_types": { 00:09:00.361 "read": true, 00:09:00.361 "write": true, 00:09:00.361 "unmap": true, 00:09:00.361 "flush": true, 00:09:00.361 "reset": true, 00:09:00.361 "nvme_admin": false, 00:09:00.361 "nvme_io": false, 00:09:00.361 "nvme_io_md": false, 00:09:00.361 "write_zeroes": true, 00:09:00.361 "zcopy": false, 00:09:00.361 "get_zone_info": false, 00:09:00.361 "zone_management": false, 00:09:00.361 "zone_append": false, 00:09:00.361 "compare": false, 00:09:00.361 "compare_and_write": false, 00:09:00.361 "abort": false, 00:09:00.361 "seek_hole": false, 00:09:00.361 "seek_data": false, 00:09:00.361 "copy": false, 00:09:00.361 "nvme_iov_md": false 00:09:00.361 }, 00:09:00.361 "memory_domains": [ 00:09:00.361 { 00:09:00.361 "dma_device_id": "system", 00:09:00.361 "dma_device_type": 1 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.361 "dma_device_type": 2 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "dma_device_id": "system", 00:09:00.361 "dma_device_type": 1 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.361 "dma_device_type": 2 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "dma_device_id": "system", 00:09:00.361 "dma_device_type": 1 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.361 "dma_device_type": 2 00:09:00.361 } 00:09:00.361 ], 00:09:00.361 "driver_specific": { 00:09:00.361 "raid": { 00:09:00.361 "uuid": "8a2056e7-6287-4b33-9c88-80cb02adaa1b", 00:09:00.361 "strip_size_kb": 64, 00:09:00.361 "state": "online", 
00:09:00.361 "raid_level": "raid0", 00:09:00.361 "superblock": true, 00:09:00.361 "num_base_bdevs": 3, 00:09:00.361 "num_base_bdevs_discovered": 3, 00:09:00.361 "num_base_bdevs_operational": 3, 00:09:00.361 "base_bdevs_list": [ 00:09:00.361 { 00:09:00.361 "name": "NewBaseBdev", 00:09:00.361 "uuid": "99a81927-5eb0-4a1c-a50e-8e0390a50683", 00:09:00.361 "is_configured": true, 00:09:00.361 "data_offset": 2048, 00:09:00.361 "data_size": 63488 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "name": "BaseBdev2", 00:09:00.361 "uuid": "ddbe28e5-e5d1-4600-b782-4c8a123872ba", 00:09:00.361 "is_configured": true, 00:09:00.361 "data_offset": 2048, 00:09:00.361 "data_size": 63488 00:09:00.361 }, 00:09:00.361 { 00:09:00.361 "name": "BaseBdev3", 00:09:00.361 "uuid": "4bde12b5-c966-4989-8749-2a033b177459", 00:09:00.361 "is_configured": true, 00:09:00.361 "data_offset": 2048, 00:09:00.361 "data_size": 63488 00:09:00.362 } 00:09:00.362 ] 00:09:00.362 } 00:09:00.362 } 00:09:00.362 }' 00:09:00.362 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.362 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:00.362 BaseBdev2 00:09:00.362 BaseBdev3' 00:09:00.362 03:57:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.622 03:57:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.622 [2024-11-18 03:57:57.161868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.622 [2024-11-18 03:57:57.161945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.622 [2024-11-18 03:57:57.162053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.622 [2024-11-18 03:57:57.162147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.622 [2024-11-18 03:57:57.162249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64423 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64423 ']' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64423 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64423 00:09:00.622 killing process with pid 64423 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64423' 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64423 00:09:00.622 [2024-11-18 03:57:57.206311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.622 03:57:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64423 00:09:00.882 [2024-11-18 03:57:57.507596] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.264 ************************************ 00:09:02.264 END TEST raid_state_function_test_sb 00:09:02.264 ************************************ 00:09:02.264 03:57:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:02.264 00:09:02.264 real 0m10.494s 00:09:02.264 user 0m16.553s 00:09:02.264 sys 0m1.904s 00:09:02.264 03:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.264 03:57:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.264 03:57:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:02.264 03:57:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:02.264 
03:57:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.264 03:57:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.264 ************************************ 00:09:02.264 START TEST raid_superblock_test 00:09:02.264 ************************************ 00:09:02.264 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65049 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65049 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65049 ']' 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.265 03:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.265 [2024-11-18 03:57:58.760738] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:02.265 [2024-11-18 03:57:58.760970] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65049 ] 00:09:02.525 [2024-11-18 03:57:58.914135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.525 [2024-11-18 03:57:59.021125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.783 [2024-11-18 03:57:59.221698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.783 [2024-11-18 03:57:59.221847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:03.043 
03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.043 malloc1 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.043 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.043 [2024-11-18 03:57:59.664243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.043 [2024-11-18 03:57:59.664320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.043 [2024-11-18 03:57:59.664347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:03.044 [2024-11-18 03:57:59.664358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.044 [2024-11-18 03:57:59.666476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.044 [2024-11-18 03:57:59.666517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.044 pt1 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.044 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.304 malloc2 00:09:03.304 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.304 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.304 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.304 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.305 [2024-11-18 03:57:59.717561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.305 [2024-11-18 03:57:59.717689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.305 [2024-11-18 03:57:59.717736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:03.305 [2024-11-18 03:57:59.717773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.305 [2024-11-18 03:57:59.719821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.305 [2024-11-18 03:57:59.719920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.305 
pt2 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.305 malloc3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.305 [2024-11-18 03:57:59.786427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:03.305 [2024-11-18 03:57:59.786536] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.305 [2024-11-18 03:57:59.786582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:03.305 [2024-11-18 03:57:59.786621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.305 [2024-11-18 03:57:59.788780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.305 [2024-11-18 03:57:59.788898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:03.305 pt3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.305 [2024-11-18 03:57:59.798480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:03.305 [2024-11-18 03:57:59.800408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:03.305 [2024-11-18 03:57:59.800553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:03.305 [2024-11-18 03:57:59.800758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:03.305 [2024-11-18 03:57:59.800820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.305 [2024-11-18 03:57:59.801123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:03.305 [2024-11-18 03:57:59.801356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:03.305 [2024-11-18 03:57:59.801407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:03.305 [2024-11-18 03:57:59.801619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.305 03:57:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.305 "name": "raid_bdev1", 00:09:03.305 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:03.305 "strip_size_kb": 64, 00:09:03.305 "state": "online", 00:09:03.305 "raid_level": "raid0", 00:09:03.305 "superblock": true, 00:09:03.305 "num_base_bdevs": 3, 00:09:03.305 "num_base_bdevs_discovered": 3, 00:09:03.305 "num_base_bdevs_operational": 3, 00:09:03.305 "base_bdevs_list": [ 00:09:03.305 { 00:09:03.305 "name": "pt1", 00:09:03.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.305 "is_configured": true, 00:09:03.305 "data_offset": 2048, 00:09:03.305 "data_size": 63488 00:09:03.305 }, 00:09:03.305 { 00:09:03.305 "name": "pt2", 00:09:03.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.305 "is_configured": true, 00:09:03.305 "data_offset": 2048, 00:09:03.305 "data_size": 63488 00:09:03.305 }, 00:09:03.305 { 00:09:03.305 "name": "pt3", 00:09:03.305 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.305 "is_configured": true, 00:09:03.305 "data_offset": 2048, 00:09:03.305 "data_size": 63488 00:09:03.305 } 00:09:03.305 ] 00:09:03.305 }' 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.305 03:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.875 [2024-11-18 03:58:00.282029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.875 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:03.875 "name": "raid_bdev1", 00:09:03.876 "aliases": [ 00:09:03.876 "4fbb519e-346d-4a88-9317-019f5adb7576" 00:09:03.876 ], 00:09:03.876 "product_name": "Raid Volume", 00:09:03.876 "block_size": 512, 00:09:03.876 "num_blocks": 190464, 00:09:03.876 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:03.876 "assigned_rate_limits": { 00:09:03.876 "rw_ios_per_sec": 0, 00:09:03.876 "rw_mbytes_per_sec": 0, 00:09:03.876 "r_mbytes_per_sec": 0, 00:09:03.876 "w_mbytes_per_sec": 0 00:09:03.876 }, 00:09:03.876 "claimed": false, 00:09:03.876 "zoned": false, 00:09:03.876 "supported_io_types": { 00:09:03.876 "read": true, 00:09:03.876 "write": true, 00:09:03.876 "unmap": true, 00:09:03.876 "flush": true, 00:09:03.876 "reset": true, 00:09:03.876 "nvme_admin": false, 00:09:03.876 "nvme_io": false, 00:09:03.876 "nvme_io_md": false, 00:09:03.876 "write_zeroes": true, 00:09:03.876 "zcopy": false, 00:09:03.876 "get_zone_info": false, 00:09:03.876 "zone_management": false, 00:09:03.876 "zone_append": false, 00:09:03.876 "compare": 
false, 00:09:03.876 "compare_and_write": false, 00:09:03.876 "abort": false, 00:09:03.876 "seek_hole": false, 00:09:03.876 "seek_data": false, 00:09:03.876 "copy": false, 00:09:03.876 "nvme_iov_md": false 00:09:03.876 }, 00:09:03.876 "memory_domains": [ 00:09:03.876 { 00:09:03.876 "dma_device_id": "system", 00:09:03.876 "dma_device_type": 1 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.876 "dma_device_type": 2 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "dma_device_id": "system", 00:09:03.876 "dma_device_type": 1 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.876 "dma_device_type": 2 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "dma_device_id": "system", 00:09:03.876 "dma_device_type": 1 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.876 "dma_device_type": 2 00:09:03.876 } 00:09:03.876 ], 00:09:03.876 "driver_specific": { 00:09:03.876 "raid": { 00:09:03.876 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:03.876 "strip_size_kb": 64, 00:09:03.876 "state": "online", 00:09:03.876 "raid_level": "raid0", 00:09:03.876 "superblock": true, 00:09:03.876 "num_base_bdevs": 3, 00:09:03.876 "num_base_bdevs_discovered": 3, 00:09:03.876 "num_base_bdevs_operational": 3, 00:09:03.876 "base_bdevs_list": [ 00:09:03.876 { 00:09:03.876 "name": "pt1", 00:09:03.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:03.876 "is_configured": true, 00:09:03.876 "data_offset": 2048, 00:09:03.876 "data_size": 63488 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "name": "pt2", 00:09:03.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:03.876 "is_configured": true, 00:09:03.876 "data_offset": 2048, 00:09:03.876 "data_size": 63488 00:09:03.876 }, 00:09:03.876 { 00:09:03.876 "name": "pt3", 00:09:03.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:03.876 "is_configured": true, 00:09:03.876 "data_offset": 2048, 00:09:03.876 "data_size": 
63488 00:09:03.876 } 00:09:03.876 ] 00:09:03.876 } 00:09:03.876 } 00:09:03.876 }' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:03.876 pt2 00:09:03.876 pt3' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.876 
03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.876 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.137 [2024-11-18 03:58:00.577443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4fbb519e-346d-4a88-9317-019f5adb7576 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4fbb519e-346d-4a88-9317-019f5adb7576 ']' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.137 [2024-11-18 03:58:00.621089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.137 [2024-11-18 03:58:00.621125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.137 [2024-11-18 03:58:00.621221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.137 [2024-11-18 03:58:00.621291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.137 [2024-11-18 03:58:00.621303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.137 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:04.138 03:58:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.138 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.138 [2024-11-18 03:58:00.768904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:04.138 [2024-11-18 03:58:00.770863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:04.138 [2024-11-18 03:58:00.770988] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:04.138 [2024-11-18 03:58:00.771068] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:04.138 [2024-11-18 03:58:00.771133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:04.138 [2024-11-18 03:58:00.771156] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:04.138 [2024-11-18 03:58:00.771176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.138 [2024-11-18 03:58:00.771189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:04.398 request: 00:09:04.398 { 00:09:04.398 "name": "raid_bdev1", 00:09:04.398 "raid_level": "raid0", 00:09:04.398 "base_bdevs": [ 00:09:04.398 "malloc1", 00:09:04.398 "malloc2", 00:09:04.398 "malloc3" 00:09:04.398 ], 00:09:04.398 "strip_size_kb": 64, 00:09:04.398 "superblock": false, 00:09:04.398 "method": "bdev_raid_create", 00:09:04.398 "req_id": 1 00:09:04.398 } 00:09:04.398 Got JSON-RPC error response 00:09:04.398 response: 00:09:04.398 { 00:09:04.398 "code": -17, 00:09:04.398 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:04.398 } 00:09:04.398 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:04.398 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:04.398 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 [2024-11-18 03:58:00.824721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.399 [2024-11-18 03:58:00.824854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.399 [2024-11-18 03:58:00.824899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:04.399 [2024-11-18 03:58:00.824942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.399 [2024-11-18 03:58:00.827143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.399 [2024-11-18 03:58:00.827225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.399 [2024-11-18 03:58:00.827358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:04.399 [2024-11-18 03:58:00.827447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:04.399 pt1 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.399 "name": "raid_bdev1", 00:09:04.399 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:04.399 
"strip_size_kb": 64, 00:09:04.399 "state": "configuring", 00:09:04.399 "raid_level": "raid0", 00:09:04.399 "superblock": true, 00:09:04.399 "num_base_bdevs": 3, 00:09:04.399 "num_base_bdevs_discovered": 1, 00:09:04.399 "num_base_bdevs_operational": 3, 00:09:04.399 "base_bdevs_list": [ 00:09:04.399 { 00:09:04.399 "name": "pt1", 00:09:04.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.399 "is_configured": true, 00:09:04.399 "data_offset": 2048, 00:09:04.399 "data_size": 63488 00:09:04.399 }, 00:09:04.399 { 00:09:04.399 "name": null, 00:09:04.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.399 "is_configured": false, 00:09:04.399 "data_offset": 2048, 00:09:04.399 "data_size": 63488 00:09:04.399 }, 00:09:04.399 { 00:09:04.399 "name": null, 00:09:04.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.399 "is_configured": false, 00:09:04.399 "data_offset": 2048, 00:09:04.399 "data_size": 63488 00:09:04.399 } 00:09:04.399 ] 00:09:04.399 }' 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.399 03:58:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.659 [2024-11-18 03:58:01.256050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.659 [2024-11-18 03:58:01.256199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.659 [2024-11-18 03:58:01.256233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:04.659 [2024-11-18 03:58:01.256247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.659 [2024-11-18 03:58:01.256770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.659 [2024-11-18 03:58:01.256803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.659 [2024-11-18 03:58:01.256932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:04.659 [2024-11-18 03:58:01.256957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.659 pt2 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.659 [2024-11-18 03:58:01.268004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.659 03:58:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.659 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.919 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.919 "name": "raid_bdev1", 00:09:04.919 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:04.919 "strip_size_kb": 64, 00:09:04.919 "state": "configuring", 00:09:04.919 "raid_level": "raid0", 00:09:04.919 "superblock": true, 00:09:04.919 "num_base_bdevs": 3, 00:09:04.919 "num_base_bdevs_discovered": 1, 00:09:04.919 "num_base_bdevs_operational": 3, 00:09:04.919 "base_bdevs_list": [ 00:09:04.919 { 00:09:04.919 "name": "pt1", 00:09:04.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:04.919 "is_configured": true, 00:09:04.919 "data_offset": 2048, 00:09:04.919 "data_size": 63488 00:09:04.919 }, 00:09:04.919 { 00:09:04.919 "name": null, 00:09:04.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:04.919 "is_configured": false, 00:09:04.919 "data_offset": 0, 00:09:04.919 "data_size": 63488 00:09:04.919 }, 00:09:04.919 { 00:09:04.919 "name": null, 00:09:04.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:04.919 
"is_configured": false, 00:09:04.919 "data_offset": 2048, 00:09:04.919 "data_size": 63488 00:09:04.919 } 00:09:04.919 ] 00:09:04.919 }' 00:09:04.919 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.919 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.179 [2024-11-18 03:58:01.699497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:05.179 [2024-11-18 03:58:01.699655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.179 [2024-11-18 03:58:01.699697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:05.179 [2024-11-18 03:58:01.699712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.179 [2024-11-18 03:58:01.700241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.179 [2024-11-18 03:58:01.700273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:05.179 [2024-11-18 03:58:01.700372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:05.179 [2024-11-18 03:58:01.700399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.179 pt2 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.179 [2024-11-18 03:58:01.711419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:05.179 [2024-11-18 03:58:01.711475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.179 [2024-11-18 03:58:01.711490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:05.179 [2024-11-18 03:58:01.711502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.179 [2024-11-18 03:58:01.711931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.179 [2024-11-18 03:58:01.711962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:05.179 [2024-11-18 03:58:01.712032] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:05.179 [2024-11-18 03:58:01.712055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:05.179 [2024-11-18 03:58:01.712172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:05.179 [2024-11-18 03:58:01.712185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.179 [2024-11-18 03:58:01.712440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:05.179 [2024-11-18 03:58:01.712607] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:05.179 [2024-11-18 03:58:01.712616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:05.179 [2024-11-18 03:58:01.712752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.179 pt3 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.179 "name": "raid_bdev1", 00:09:05.179 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:05.179 "strip_size_kb": 64, 00:09:05.179 "state": "online", 00:09:05.179 "raid_level": "raid0", 00:09:05.179 "superblock": true, 00:09:05.179 "num_base_bdevs": 3, 00:09:05.179 "num_base_bdevs_discovered": 3, 00:09:05.179 "num_base_bdevs_operational": 3, 00:09:05.179 "base_bdevs_list": [ 00:09:05.179 { 00:09:05.179 "name": "pt1", 00:09:05.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.179 "is_configured": true, 00:09:05.179 "data_offset": 2048, 00:09:05.179 "data_size": 63488 00:09:05.179 }, 00:09:05.179 { 00:09:05.179 "name": "pt2", 00:09:05.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.179 "is_configured": true, 00:09:05.179 "data_offset": 2048, 00:09:05.179 "data_size": 63488 00:09:05.179 }, 00:09:05.179 { 00:09:05.179 "name": "pt3", 00:09:05.179 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.179 "is_configured": true, 00:09:05.179 "data_offset": 2048, 00:09:05.179 "data_size": 63488 00:09:05.179 } 00:09:05.179 ] 00:09:05.179 }' 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.179 03:58:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:05.750 03:58:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.750 [2024-11-18 03:58:02.127045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.750 "name": "raid_bdev1", 00:09:05.750 "aliases": [ 00:09:05.750 "4fbb519e-346d-4a88-9317-019f5adb7576" 00:09:05.750 ], 00:09:05.750 "product_name": "Raid Volume", 00:09:05.750 "block_size": 512, 00:09:05.750 "num_blocks": 190464, 00:09:05.750 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:05.750 "assigned_rate_limits": { 00:09:05.750 "rw_ios_per_sec": 0, 00:09:05.750 "rw_mbytes_per_sec": 0, 00:09:05.750 "r_mbytes_per_sec": 0, 00:09:05.750 "w_mbytes_per_sec": 0 00:09:05.750 }, 00:09:05.750 "claimed": false, 00:09:05.750 "zoned": false, 00:09:05.750 "supported_io_types": { 00:09:05.750 "read": true, 00:09:05.750 "write": true, 00:09:05.750 "unmap": true, 00:09:05.750 "flush": true, 00:09:05.750 "reset": true, 00:09:05.750 "nvme_admin": false, 00:09:05.750 "nvme_io": false, 00:09:05.750 "nvme_io_md": false, 00:09:05.750 
"write_zeroes": true, 00:09:05.750 "zcopy": false, 00:09:05.750 "get_zone_info": false, 00:09:05.750 "zone_management": false, 00:09:05.750 "zone_append": false, 00:09:05.750 "compare": false, 00:09:05.750 "compare_and_write": false, 00:09:05.750 "abort": false, 00:09:05.750 "seek_hole": false, 00:09:05.750 "seek_data": false, 00:09:05.750 "copy": false, 00:09:05.750 "nvme_iov_md": false 00:09:05.750 }, 00:09:05.750 "memory_domains": [ 00:09:05.750 { 00:09:05.750 "dma_device_id": "system", 00:09:05.750 "dma_device_type": 1 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.750 "dma_device_type": 2 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "dma_device_id": "system", 00:09:05.750 "dma_device_type": 1 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.750 "dma_device_type": 2 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "dma_device_id": "system", 00:09:05.750 "dma_device_type": 1 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.750 "dma_device_type": 2 00:09:05.750 } 00:09:05.750 ], 00:09:05.750 "driver_specific": { 00:09:05.750 "raid": { 00:09:05.750 "uuid": "4fbb519e-346d-4a88-9317-019f5adb7576", 00:09:05.750 "strip_size_kb": 64, 00:09:05.750 "state": "online", 00:09:05.750 "raid_level": "raid0", 00:09:05.750 "superblock": true, 00:09:05.750 "num_base_bdevs": 3, 00:09:05.750 "num_base_bdevs_discovered": 3, 00:09:05.750 "num_base_bdevs_operational": 3, 00:09:05.750 "base_bdevs_list": [ 00:09:05.750 { 00:09:05.750 "name": "pt1", 00:09:05.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.750 "is_configured": true, 00:09:05.750 "data_offset": 2048, 00:09:05.750 "data_size": 63488 00:09:05.750 }, 00:09:05.750 { 00:09:05.750 "name": "pt2", 00:09:05.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.750 "is_configured": true, 00:09:05.750 "data_offset": 2048, 00:09:05.750 "data_size": 63488 00:09:05.750 }, 00:09:05.750 
{ 00:09:05.750 "name": "pt3", 00:09:05.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.750 "is_configured": true, 00:09:05.750 "data_offset": 2048, 00:09:05.750 "data_size": 63488 00:09:05.750 } 00:09:05.750 ] 00:09:05.750 } 00:09:05.750 } 00:09:05.750 }' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:05.750 pt2 00:09:05.750 pt3' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:05.750 03:58:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.750 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.750 
[2024-11-18 03:58:02.378660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4fbb519e-346d-4a88-9317-019f5adb7576 '!=' 4fbb519e-346d-4a88-9317-019f5adb7576 ']' 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65049 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65049 ']' 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65049 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65049 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.011 killing process with pid 65049 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65049' 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65049 00:09:06.011 [2024-11-18 03:58:02.464181] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.011 [2024-11-18 03:58:02.464294] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.011 03:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65049 00:09:06.011 [2024-11-18 03:58:02.464358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.011 [2024-11-18 03:58:02.464372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:06.270 [2024-11-18 03:58:02.765173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.210 03:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:07.210 00:09:07.210 real 0m5.176s 00:09:07.210 user 0m7.419s 00:09:07.210 sys 0m0.901s 00:09:07.210 03:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.210 ************************************ 00:09:07.210 END TEST raid_superblock_test 00:09:07.210 ************************************ 00:09:07.210 03:58:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.470 03:58:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:07.470 03:58:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.470 03:58:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.470 03:58:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.470 ************************************ 00:09:07.470 START TEST raid_read_error_test 00:09:07.470 ************************************ 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:07.470 03:58:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.44r3ILqGSJ 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65302 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65302 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65302 ']' 00:09:07.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.470 03:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.470 [2024-11-18 03:58:04.016707] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:07.470 [2024-11-18 03:58:04.016924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65302 ] 00:09:07.730 [2024-11-18 03:58:04.190469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.730 [2024-11-18 03:58:04.300852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.989 [2024-11-18 03:58:04.493882] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.989 [2024-11-18 03:58:04.493952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.249 BaseBdev1_malloc 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.249 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 true 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 [2024-11-18 03:58:04.903169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:08.510 [2024-11-18 03:58:04.903231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.510 [2024-11-18 03:58:04.903252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:08.510 [2024-11-18 03:58:04.903265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.510 [2024-11-18 03:58:04.905382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.510 [2024-11-18 03:58:04.905430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:08.510 BaseBdev1 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 BaseBdev2_malloc 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 true 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 [2024-11-18 03:58:04.968165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:08.510 [2024-11-18 03:58:04.968226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.510 [2024-11-18 03:58:04.968244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:08.510 [2024-11-18 03:58:04.968256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.510 [2024-11-18 03:58:04.970342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.510 [2024-11-18 03:58:04.970438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:08.510 BaseBdev2 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 BaseBdev3_malloc 00:09:08.510 03:58:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 true 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.510 [2024-11-18 03:58:05.047924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:08.510 [2024-11-18 03:58:05.048042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.510 [2024-11-18 03:58:05.048066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:08.510 [2024-11-18 03:58:05.048079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.510 [2024-11-18 03:58:05.050129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.510 [2024-11-18 03:58:05.050187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:08.510 BaseBdev3 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.510 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.511 [2024-11-18 03:58:05.059992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.511 [2024-11-18 03:58:05.061750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.511 [2024-11-18 03:58:05.061852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.511 [2024-11-18 03:58:05.062054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:08.511 [2024-11-18 03:58:05.062069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.511 [2024-11-18 03:58:05.062309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:08.511 [2024-11-18 03:58:05.062466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:08.511 [2024-11-18 03:58:05.062481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:08.511 [2024-11-18 03:58:05.062619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.511 03:58:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.511 "name": "raid_bdev1", 00:09:08.511 "uuid": "4e0b06ee-760e-4673-8cd9-6172fc18ac05", 00:09:08.511 "strip_size_kb": 64, 00:09:08.511 "state": "online", 00:09:08.511 "raid_level": "raid0", 00:09:08.511 "superblock": true, 00:09:08.511 "num_base_bdevs": 3, 00:09:08.511 "num_base_bdevs_discovered": 3, 00:09:08.511 "num_base_bdevs_operational": 3, 00:09:08.511 "base_bdevs_list": [ 00:09:08.511 { 00:09:08.511 "name": "BaseBdev1", 00:09:08.511 "uuid": "f534e63c-d609-5453-8d78-d9770f9b8f7d", 00:09:08.511 "is_configured": true, 00:09:08.511 "data_offset": 2048, 00:09:08.511 "data_size": 63488 00:09:08.511 }, 00:09:08.511 { 00:09:08.511 "name": "BaseBdev2", 00:09:08.511 "uuid": "0c4b0c3a-fa21-5825-8a0d-e99762bd62ea", 00:09:08.511 "is_configured": true, 00:09:08.511 "data_offset": 2048, 00:09:08.511 "data_size": 63488 
00:09:08.511 }, 00:09:08.511 { 00:09:08.511 "name": "BaseBdev3", 00:09:08.511 "uuid": "808c428d-18db-5868-be18-ab2e13a83b17", 00:09:08.511 "is_configured": true, 00:09:08.511 "data_offset": 2048, 00:09:08.511 "data_size": 63488 00:09:08.511 } 00:09:08.511 ] 00:09:08.511 }' 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.511 03:58:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.081 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:09.081 03:58:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:09.081 [2024-11-18 03:58:05.616521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.031 "name": "raid_bdev1", 00:09:10.031 "uuid": "4e0b06ee-760e-4673-8cd9-6172fc18ac05", 00:09:10.031 "strip_size_kb": 64, 00:09:10.031 "state": "online", 00:09:10.031 "raid_level": "raid0", 00:09:10.031 "superblock": true, 00:09:10.031 "num_base_bdevs": 3, 00:09:10.031 "num_base_bdevs_discovered": 3, 00:09:10.031 "num_base_bdevs_operational": 3, 00:09:10.031 "base_bdevs_list": [ 00:09:10.031 { 00:09:10.031 "name": "BaseBdev1", 00:09:10.031 "uuid": "f534e63c-d609-5453-8d78-d9770f9b8f7d", 00:09:10.031 "is_configured": true, 00:09:10.031 "data_offset": 2048, 00:09:10.031 "data_size": 63488 
00:09:10.031 }, 00:09:10.031 { 00:09:10.031 "name": "BaseBdev2", 00:09:10.031 "uuid": "0c4b0c3a-fa21-5825-8a0d-e99762bd62ea", 00:09:10.031 "is_configured": true, 00:09:10.031 "data_offset": 2048, 00:09:10.031 "data_size": 63488 00:09:10.031 }, 00:09:10.031 { 00:09:10.031 "name": "BaseBdev3", 00:09:10.031 "uuid": "808c428d-18db-5868-be18-ab2e13a83b17", 00:09:10.031 "is_configured": true, 00:09:10.031 "data_offset": 2048, 00:09:10.031 "data_size": 63488 00:09:10.031 } 00:09:10.031 ] 00:09:10.031 }' 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.031 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.601 [2024-11-18 03:58:06.964428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:10.601 [2024-11-18 03:58:06.964535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.601 [2024-11-18 03:58:06.967110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.601 [2024-11-18 03:58:06.967201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.601 [2024-11-18 03:58:06.967265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.601 [2024-11-18 03:58:06.967352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:10.601 { 00:09:10.601 "results": [ 00:09:10.601 { 00:09:10.601 "job": "raid_bdev1", 00:09:10.601 "core_mask": "0x1", 00:09:10.601 "workload": "randrw", 00:09:10.601 "percentage": 50, 
00:09:10.601 "status": "finished", 00:09:10.601 "queue_depth": 1, 00:09:10.601 "io_size": 131072, 00:09:10.601 "runtime": 1.348689, 00:09:10.601 "iops": 15469.837746137175, 00:09:10.601 "mibps": 1933.7297182671468, 00:09:10.601 "io_failed": 1, 00:09:10.601 "io_timeout": 0, 00:09:10.601 "avg_latency_us": 89.79378608794109, 00:09:10.601 "min_latency_us": 25.9353711790393, 00:09:10.601 "max_latency_us": 1466.6899563318777 00:09:10.601 } 00:09:10.601 ], 00:09:10.601 "core_count": 1 00:09:10.601 } 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65302 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65302 ']' 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65302 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65302 00:09:10.601 killing process with pid 65302 00:09:10.601 03:58:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.601 03:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.601 03:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65302' 00:09:10.601 03:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65302 00:09:10.601 [2024-11-18 03:58:07.002330] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.601 03:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65302 00:09:10.601 [2024-11-18 
03:58:07.228333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.44r3ILqGSJ 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.984 ************************************ 00:09:11.984 END TEST raid_read_error_test 00:09:11.984 ************************************ 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:11.984 00:09:11.984 real 0m4.449s 00:09:11.984 user 0m5.276s 00:09:11.984 sys 0m0.549s 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.984 03:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.984 03:58:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:11.984 03:58:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.984 03:58:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.984 03:58:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.984 ************************************ 00:09:11.984 START TEST raid_write_error_test 00:09:11.984 ************************************ 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:11.984 03:58:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:11.984 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:11.985 03:58:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iKdNMhZWoy 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65443 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65443 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65443 ']' 00:09:11.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.985 03:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.985 [2024-11-18 03:58:08.539958] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:11.985 [2024-11-18 03:58:08.540070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65443 ] 00:09:12.244 [2024-11-18 03:58:08.712383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.244 [2024-11-18 03:58:08.826492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.504 [2024-11-18 03:58:09.029784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.504 [2024-11-18 03:58:09.029864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.763 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.763 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.763 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.763 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:12.763 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.763 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.022 BaseBdev1_malloc 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.022 true 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.022 [2024-11-18 03:58:09.427946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:13.022 [2024-11-18 03:58:09.428057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.022 [2024-11-18 03:58:09.428082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:13.022 [2024-11-18 03:58:09.428095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.022 [2024-11-18 03:58:09.430243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.022 [2024-11-18 03:58:09.430288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:13.022 BaseBdev1 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.022 BaseBdev2_malloc 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.022 true 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.022 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.022 [2024-11-18 03:58:09.495558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:13.022 [2024-11-18 03:58:09.495623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.022 [2024-11-18 03:58:09.495658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:13.022 [2024-11-18 03:58:09.495670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.022 [2024-11-18 03:58:09.497768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.023 [2024-11-18 03:58:09.497817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:13.023 BaseBdev2 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:13.023 03:58:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.023 BaseBdev3_malloc 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.023 true 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.023 [2024-11-18 03:58:09.573399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:13.023 [2024-11-18 03:58:09.573455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.023 [2024-11-18 03:58:09.573474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:13.023 [2024-11-18 03:58:09.573485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.023 [2024-11-18 03:58:09.575625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.023 [2024-11-18 03:58:09.575714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:13.023 BaseBdev3 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.023 [2024-11-18 03:58:09.585505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.023 [2024-11-18 03:58:09.587471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.023 [2024-11-18 03:58:09.587565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.023 [2024-11-18 03:58:09.587776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:13.023 [2024-11-18 03:58:09.587792] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.023 [2024-11-18 03:58:09.588122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:13.023 [2024-11-18 03:58:09.588341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:13.023 [2024-11-18 03:58:09.588357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:13.023 [2024-11-18 03:58:09.588553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.023 "name": "raid_bdev1", 00:09:13.023 "uuid": "3a400761-ffef-42ac-ab33-d2b407d70464", 00:09:13.023 "strip_size_kb": 64, 00:09:13.023 "state": "online", 00:09:13.023 "raid_level": "raid0", 00:09:13.023 "superblock": true, 00:09:13.023 "num_base_bdevs": 3, 00:09:13.023 "num_base_bdevs_discovered": 3, 00:09:13.023 "num_base_bdevs_operational": 3, 00:09:13.023 "base_bdevs_list": [ 00:09:13.023 { 00:09:13.023 "name": "BaseBdev1", 
00:09:13.023 "uuid": "731a7127-405f-594f-bcfb-8f89d7e21d29", 00:09:13.023 "is_configured": true, 00:09:13.023 "data_offset": 2048, 00:09:13.023 "data_size": 63488 00:09:13.023 }, 00:09:13.023 { 00:09:13.023 "name": "BaseBdev2", 00:09:13.023 "uuid": "37ad3654-779c-5f78-bd96-b852534ccc16", 00:09:13.023 "is_configured": true, 00:09:13.023 "data_offset": 2048, 00:09:13.023 "data_size": 63488 00:09:13.023 }, 00:09:13.023 { 00:09:13.023 "name": "BaseBdev3", 00:09:13.023 "uuid": "ca5ed164-e26d-5a2a-9142-c6d2c704fad5", 00:09:13.023 "is_configured": true, 00:09:13.023 "data_offset": 2048, 00:09:13.023 "data_size": 63488 00:09:13.023 } 00:09:13.023 ] 00:09:13.023 }' 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.023 03:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.590 03:58:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:13.590 03:58:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:13.590 [2024-11-18 03:58:10.137836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.529 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.529 "name": "raid_bdev1", 00:09:14.529 "uuid": "3a400761-ffef-42ac-ab33-d2b407d70464", 00:09:14.529 "strip_size_kb": 64, 00:09:14.529 "state": "online", 00:09:14.529 
"raid_level": "raid0", 00:09:14.529 "superblock": true, 00:09:14.529 "num_base_bdevs": 3, 00:09:14.529 "num_base_bdevs_discovered": 3, 00:09:14.529 "num_base_bdevs_operational": 3, 00:09:14.529 "base_bdevs_list": [ 00:09:14.529 { 00:09:14.529 "name": "BaseBdev1", 00:09:14.530 "uuid": "731a7127-405f-594f-bcfb-8f89d7e21d29", 00:09:14.530 "is_configured": true, 00:09:14.530 "data_offset": 2048, 00:09:14.530 "data_size": 63488 00:09:14.530 }, 00:09:14.530 { 00:09:14.530 "name": "BaseBdev2", 00:09:14.530 "uuid": "37ad3654-779c-5f78-bd96-b852534ccc16", 00:09:14.530 "is_configured": true, 00:09:14.530 "data_offset": 2048, 00:09:14.530 "data_size": 63488 00:09:14.530 }, 00:09:14.530 { 00:09:14.530 "name": "BaseBdev3", 00:09:14.530 "uuid": "ca5ed164-e26d-5a2a-9142-c6d2c704fad5", 00:09:14.530 "is_configured": true, 00:09:14.530 "data_offset": 2048, 00:09:14.530 "data_size": 63488 00:09:14.530 } 00:09:14.530 ] 00:09:14.530 }' 00:09:14.530 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.530 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.098 [2024-11-18 03:58:11.536543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:15.098 [2024-11-18 03:58:11.536680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.098 [2024-11-18 03:58:11.539399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.098 [2024-11-18 03:58:11.539521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.098 [2024-11-18 03:58:11.539588] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.098 [2024-11-18 03:58:11.539641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:15.098 { 00:09:15.098 "results": [ 00:09:15.098 { 00:09:15.098 "job": "raid_bdev1", 00:09:15.098 "core_mask": "0x1", 00:09:15.098 "workload": "randrw", 00:09:15.098 "percentage": 50, 00:09:15.098 "status": "finished", 00:09:15.098 "queue_depth": 1, 00:09:15.098 "io_size": 131072, 00:09:15.098 "runtime": 1.399602, 00:09:15.098 "iops": 15532.27274610925, 00:09:15.098 "mibps": 1941.5340932636564, 00:09:15.098 "io_failed": 1, 00:09:15.098 "io_timeout": 0, 00:09:15.098 "avg_latency_us": 89.48975482378086, 00:09:15.098 "min_latency_us": 22.581659388646287, 00:09:15.098 "max_latency_us": 1423.7624454148472 00:09:15.098 } 00:09:15.098 ], 00:09:15.098 "core_count": 1 00:09:15.098 } 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65443 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65443 ']' 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65443 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65443 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65443' 00:09:15.098 killing process with pid 65443 00:09:15.098 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65443 00:09:15.099 [2024-11-18 03:58:11.587562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.099 03:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65443 00:09:15.360 [2024-11-18 03:58:11.804297] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iKdNMhZWoy 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.319 ************************************ 00:09:16.319 END TEST raid_write_error_test 00:09:16.319 ************************************ 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:16.319 00:09:16.319 real 0m4.502s 00:09:16.319 user 0m5.384s 00:09:16.319 sys 0m0.562s 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.319 03:58:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.578 03:58:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:16.578 03:58:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:16.578 03:58:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.578 03:58:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.578 03:58:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.578 ************************************ 00:09:16.578 START TEST raid_state_function_test 00:09:16.578 ************************************ 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:16.578 03:58:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65587 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65587' 00:09:16.578 Process raid pid: 65587 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65587 00:09:16.578 03:58:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65587 ']' 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.578 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.578 [2024-11-18 03:58:13.106994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:16.578 [2024-11-18 03:58:13.107216] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.837 [2024-11-18 03:58:13.285910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.837 [2024-11-18 03:58:13.394460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.097 [2024-11-18 03:58:13.598655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.097 [2024-11-18 03:58:13.598692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.356 [2024-11-18 03:58:13.951002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.356 [2024-11-18 03:58:13.951067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.356 [2024-11-18 03:58:13.951079] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.356 [2024-11-18 03:58:13.951090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.356 [2024-11-18 03:58:13.951098] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.356 [2024-11-18 03:58:13.951108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.356 03:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.617 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.617 "name": "Existed_Raid", 00:09:17.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.617 "strip_size_kb": 64, 00:09:17.617 "state": "configuring", 00:09:17.617 "raid_level": "concat", 00:09:17.617 "superblock": false, 00:09:17.617 "num_base_bdevs": 3, 00:09:17.617 "num_base_bdevs_discovered": 0, 00:09:17.617 "num_base_bdevs_operational": 3, 00:09:17.617 "base_bdevs_list": [ 00:09:17.617 { 00:09:17.617 "name": "BaseBdev1", 00:09:17.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.617 "is_configured": false, 00:09:17.617 "data_offset": 0, 00:09:17.617 "data_size": 0 00:09:17.617 }, 00:09:17.617 { 00:09:17.617 "name": "BaseBdev2", 00:09:17.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.617 "is_configured": false, 00:09:17.617 "data_offset": 0, 00:09:17.617 "data_size": 0 00:09:17.617 }, 00:09:17.617 { 00:09:17.617 "name": "BaseBdev3", 00:09:17.617 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:17.617 "is_configured": false, 00:09:17.617 "data_offset": 0, 00:09:17.617 "data_size": 0 00:09:17.617 } 00:09:17.617 ] 00:09:17.617 }' 00:09:17.617 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.617 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 [2024-11-18 03:58:14.422134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:17.877 [2024-11-18 03:58:14.422247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 [2024-11-18 03:58:14.434104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.877 [2024-11-18 03:58:14.434204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.877 [2024-11-18 03:58:14.434237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.877 [2024-11-18 03:58:14.434266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:17.877 [2024-11-18 03:58:14.434288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:17.877 [2024-11-18 03:58:14.434314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 [2024-11-18 03:58:14.481127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.877 BaseBdev1 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.877 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.877 [ 00:09:17.877 { 00:09:17.877 "name": "BaseBdev1", 00:09:17.877 "aliases": [ 00:09:17.877 "108dc8d4-be83-46f1-8220-74c2e82f734f" 00:09:17.877 ], 00:09:17.877 "product_name": "Malloc disk", 00:09:17.877 "block_size": 512, 00:09:17.877 "num_blocks": 65536, 00:09:17.877 "uuid": "108dc8d4-be83-46f1-8220-74c2e82f734f", 00:09:17.877 "assigned_rate_limits": { 00:09:17.877 "rw_ios_per_sec": 0, 00:09:17.877 "rw_mbytes_per_sec": 0, 00:09:17.877 "r_mbytes_per_sec": 0, 00:09:17.877 "w_mbytes_per_sec": 0 00:09:17.877 }, 00:09:17.877 "claimed": true, 00:09:17.877 "claim_type": "exclusive_write", 00:09:17.877 "zoned": false, 00:09:17.877 "supported_io_types": { 00:09:17.877 "read": true, 00:09:17.877 "write": true, 00:09:17.877 "unmap": true, 00:09:17.877 "flush": true, 00:09:17.877 "reset": true, 00:09:17.877 "nvme_admin": false, 00:09:17.877 "nvme_io": false, 00:09:17.877 "nvme_io_md": false, 00:09:17.877 "write_zeroes": true, 00:09:17.877 "zcopy": true, 00:09:17.877 "get_zone_info": false, 00:09:17.877 "zone_management": false, 00:09:17.877 "zone_append": false, 00:09:17.877 "compare": false, 00:09:17.877 "compare_and_write": false, 00:09:17.877 "abort": true, 00:09:17.877 "seek_hole": false, 00:09:17.877 "seek_data": false, 00:09:17.877 "copy": true, 00:09:17.877 "nvme_iov_md": false 00:09:17.877 }, 00:09:17.877 "memory_domains": [ 00:09:17.877 { 00:09:17.877 "dma_device_id": "system", 00:09:17.877 "dma_device_type": 1 00:09:17.877 }, 00:09:17.877 { 00:09:17.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:17.877 "dma_device_type": 2 00:09:17.877 } 00:09:18.137 ], 00:09:18.137 "driver_specific": {} 00:09:18.137 } 00:09:18.137 ] 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.137 03:58:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.137 "name": "Existed_Raid", 00:09:18.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.137 "strip_size_kb": 64, 00:09:18.137 "state": "configuring", 00:09:18.137 "raid_level": "concat", 00:09:18.137 "superblock": false, 00:09:18.137 "num_base_bdevs": 3, 00:09:18.137 "num_base_bdevs_discovered": 1, 00:09:18.137 "num_base_bdevs_operational": 3, 00:09:18.137 "base_bdevs_list": [ 00:09:18.137 { 00:09:18.137 "name": "BaseBdev1", 00:09:18.137 "uuid": "108dc8d4-be83-46f1-8220-74c2e82f734f", 00:09:18.137 "is_configured": true, 00:09:18.137 "data_offset": 0, 00:09:18.137 "data_size": 65536 00:09:18.137 }, 00:09:18.137 { 00:09:18.137 "name": "BaseBdev2", 00:09:18.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.137 "is_configured": false, 00:09:18.137 "data_offset": 0, 00:09:18.137 "data_size": 0 00:09:18.137 }, 00:09:18.137 { 00:09:18.137 "name": "BaseBdev3", 00:09:18.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.137 "is_configured": false, 00:09:18.137 "data_offset": 0, 00:09:18.137 "data_size": 0 00:09:18.137 } 00:09:18.137 ] 00:09:18.137 }' 00:09:18.137 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.138 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.398 [2024-11-18 03:58:14.980335] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.398 [2024-11-18 03:58:14.980397] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.398 [2024-11-18 03:58:14.992344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.398 [2024-11-18 03:58:14.994206] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.398 [2024-11-18 03:58:14.994255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.398 [2024-11-18 03:58:14.994268] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.398 [2024-11-18 03:58:14.994279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.398 03:58:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.398 03:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.398 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.398 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.398 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.398 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.398 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.658 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.658 "name": "Existed_Raid", 00:09:18.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.658 "strip_size_kb": 64, 00:09:18.658 "state": "configuring", 00:09:18.658 "raid_level": "concat", 00:09:18.658 "superblock": false, 00:09:18.658 "num_base_bdevs": 3, 00:09:18.658 "num_base_bdevs_discovered": 1, 00:09:18.658 "num_base_bdevs_operational": 3, 00:09:18.658 "base_bdevs_list": [ 00:09:18.658 { 00:09:18.658 "name": "BaseBdev1", 00:09:18.658 "uuid": "108dc8d4-be83-46f1-8220-74c2e82f734f", 00:09:18.658 "is_configured": true, 00:09:18.658 "data_offset": 
0, 00:09:18.658 "data_size": 65536 00:09:18.658 }, 00:09:18.658 { 00:09:18.658 "name": "BaseBdev2", 00:09:18.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.658 "is_configured": false, 00:09:18.658 "data_offset": 0, 00:09:18.658 "data_size": 0 00:09:18.658 }, 00:09:18.658 { 00:09:18.658 "name": "BaseBdev3", 00:09:18.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.658 "is_configured": false, 00:09:18.658 "data_offset": 0, 00:09:18.658 "data_size": 0 00:09:18.658 } 00:09:18.658 ] 00:09:18.658 }' 00:09:18.658 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.658 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 [2024-11-18 03:58:15.484530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.919 BaseBdev2 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 [ 00:09:18.919 { 00:09:18.919 "name": "BaseBdev2", 00:09:18.919 "aliases": [ 00:09:18.919 "8a0c72d4-ac00-4931-8c82-95b79a10ce4d" 00:09:18.919 ], 00:09:18.919 "product_name": "Malloc disk", 00:09:18.919 "block_size": 512, 00:09:18.919 "num_blocks": 65536, 00:09:18.919 "uuid": "8a0c72d4-ac00-4931-8c82-95b79a10ce4d", 00:09:18.919 "assigned_rate_limits": { 00:09:18.919 "rw_ios_per_sec": 0, 00:09:18.919 "rw_mbytes_per_sec": 0, 00:09:18.919 "r_mbytes_per_sec": 0, 00:09:18.919 "w_mbytes_per_sec": 0 00:09:18.919 }, 00:09:18.919 "claimed": true, 00:09:18.919 "claim_type": "exclusive_write", 00:09:18.919 "zoned": false, 00:09:18.919 "supported_io_types": { 00:09:18.919 "read": true, 00:09:18.919 "write": true, 00:09:18.919 "unmap": true, 00:09:18.919 "flush": true, 00:09:18.919 "reset": true, 00:09:18.919 "nvme_admin": false, 00:09:18.919 "nvme_io": false, 00:09:18.919 "nvme_io_md": false, 00:09:18.919 "write_zeroes": true, 00:09:18.919 "zcopy": true, 00:09:18.919 "get_zone_info": false, 00:09:18.919 "zone_management": false, 00:09:18.919 "zone_append": false, 00:09:18.919 "compare": false, 00:09:18.919 "compare_and_write": false, 00:09:18.919 "abort": true, 00:09:18.919 "seek_hole": 
false, 00:09:18.919 "seek_data": false, 00:09:18.919 "copy": true, 00:09:18.919 "nvme_iov_md": false 00:09:18.919 }, 00:09:18.919 "memory_domains": [ 00:09:18.919 { 00:09:18.919 "dma_device_id": "system", 00:09:18.919 "dma_device_type": 1 00:09:18.919 }, 00:09:18.919 { 00:09:18.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.919 "dma_device_type": 2 00:09:18.919 } 00:09:18.919 ], 00:09:18.919 "driver_specific": {} 00:09:18.919 } 00:09:18.919 ] 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.919 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.179 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.179 "name": "Existed_Raid", 00:09:19.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.179 "strip_size_kb": 64, 00:09:19.179 "state": "configuring", 00:09:19.179 "raid_level": "concat", 00:09:19.179 "superblock": false, 00:09:19.179 "num_base_bdevs": 3, 00:09:19.179 "num_base_bdevs_discovered": 2, 00:09:19.179 "num_base_bdevs_operational": 3, 00:09:19.179 "base_bdevs_list": [ 00:09:19.179 { 00:09:19.179 "name": "BaseBdev1", 00:09:19.179 "uuid": "108dc8d4-be83-46f1-8220-74c2e82f734f", 00:09:19.179 "is_configured": true, 00:09:19.179 "data_offset": 0, 00:09:19.179 "data_size": 65536 00:09:19.179 }, 00:09:19.179 { 00:09:19.179 "name": "BaseBdev2", 00:09:19.179 "uuid": "8a0c72d4-ac00-4931-8c82-95b79a10ce4d", 00:09:19.179 "is_configured": true, 00:09:19.179 "data_offset": 0, 00:09:19.179 "data_size": 65536 00:09:19.179 }, 00:09:19.179 { 00:09:19.179 "name": "BaseBdev3", 00:09:19.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.179 "is_configured": false, 00:09:19.179 "data_offset": 0, 00:09:19.179 "data_size": 0 00:09:19.179 } 00:09:19.179 ] 00:09:19.179 }' 00:09:19.179 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.179 03:58:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.440 [2024-11-18 03:58:15.991596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.440 [2024-11-18 03:58:15.991647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.440 [2024-11-18 03:58:15.991661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:19.440 [2024-11-18 03:58:15.991989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.440 [2024-11-18 03:58:15.992171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.440 [2024-11-18 03:58:15.992182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:19.440 [2024-11-18 03:58:15.992483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.440 BaseBdev3 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.440 03:58:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.440 03:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.440 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.440 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.440 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.440 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.440 [ 00:09:19.440 { 00:09:19.440 "name": "BaseBdev3", 00:09:19.440 "aliases": [ 00:09:19.440 "d8deed70-1051-41f2-9321-8d5379451054" 00:09:19.440 ], 00:09:19.440 "product_name": "Malloc disk", 00:09:19.440 "block_size": 512, 00:09:19.440 "num_blocks": 65536, 00:09:19.440 "uuid": "d8deed70-1051-41f2-9321-8d5379451054", 00:09:19.440 "assigned_rate_limits": { 00:09:19.440 "rw_ios_per_sec": 0, 00:09:19.440 "rw_mbytes_per_sec": 0, 00:09:19.440 "r_mbytes_per_sec": 0, 00:09:19.440 "w_mbytes_per_sec": 0 00:09:19.440 }, 00:09:19.440 "claimed": true, 00:09:19.440 "claim_type": "exclusive_write", 00:09:19.440 "zoned": false, 00:09:19.440 "supported_io_types": { 00:09:19.440 "read": true, 00:09:19.440 "write": true, 00:09:19.440 "unmap": true, 00:09:19.440 "flush": true, 00:09:19.440 "reset": true, 00:09:19.440 "nvme_admin": false, 00:09:19.440 "nvme_io": false, 00:09:19.440 "nvme_io_md": false, 00:09:19.440 "write_zeroes": true, 00:09:19.440 "zcopy": true, 00:09:19.440 "get_zone_info": false, 00:09:19.440 "zone_management": false, 00:09:19.440 "zone_append": false, 00:09:19.440 "compare": false, 
00:09:19.440 "compare_and_write": false, 00:09:19.440 "abort": true, 00:09:19.440 "seek_hole": false, 00:09:19.440 "seek_data": false, 00:09:19.440 "copy": true, 00:09:19.440 "nvme_iov_md": false 00:09:19.440 }, 00:09:19.440 "memory_domains": [ 00:09:19.440 { 00:09:19.440 "dma_device_id": "system", 00:09:19.440 "dma_device_type": 1 00:09:19.440 }, 00:09:19.440 { 00:09:19.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.441 "dma_device_type": 2 00:09:19.441 } 00:09:19.441 ], 00:09:19.441 "driver_specific": {} 00:09:19.441 } 00:09:19.441 ] 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.441 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.700 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.700 "name": "Existed_Raid", 00:09:19.700 "uuid": "345035f5-0c03-4d4d-8588-aad666cff0e0", 00:09:19.700 "strip_size_kb": 64, 00:09:19.700 "state": "online", 00:09:19.700 "raid_level": "concat", 00:09:19.700 "superblock": false, 00:09:19.700 "num_base_bdevs": 3, 00:09:19.700 "num_base_bdevs_discovered": 3, 00:09:19.701 "num_base_bdevs_operational": 3, 00:09:19.701 "base_bdevs_list": [ 00:09:19.701 { 00:09:19.701 "name": "BaseBdev1", 00:09:19.701 "uuid": "108dc8d4-be83-46f1-8220-74c2e82f734f", 00:09:19.701 "is_configured": true, 00:09:19.701 "data_offset": 0, 00:09:19.701 "data_size": 65536 00:09:19.701 }, 00:09:19.701 { 00:09:19.701 "name": "BaseBdev2", 00:09:19.701 "uuid": "8a0c72d4-ac00-4931-8c82-95b79a10ce4d", 00:09:19.701 "is_configured": true, 00:09:19.701 "data_offset": 0, 00:09:19.701 "data_size": 65536 00:09:19.701 }, 00:09:19.701 { 00:09:19.701 "name": "BaseBdev3", 00:09:19.701 "uuid": "d8deed70-1051-41f2-9321-8d5379451054", 00:09:19.701 "is_configured": true, 00:09:19.701 "data_offset": 0, 00:09:19.701 "data_size": 65536 00:09:19.701 } 00:09:19.701 ] 00:09:19.701 }' 00:09:19.701 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:19.701 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.961 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.962 [2024-11-18 03:58:16.451259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:19.962 "name": "Existed_Raid", 00:09:19.962 "aliases": [ 00:09:19.962 "345035f5-0c03-4d4d-8588-aad666cff0e0" 00:09:19.962 ], 00:09:19.962 "product_name": "Raid Volume", 00:09:19.962 "block_size": 512, 00:09:19.962 "num_blocks": 196608, 00:09:19.962 "uuid": "345035f5-0c03-4d4d-8588-aad666cff0e0", 00:09:19.962 "assigned_rate_limits": { 00:09:19.962 "rw_ios_per_sec": 0, 00:09:19.962 "rw_mbytes_per_sec": 0, 00:09:19.962 "r_mbytes_per_sec": 
0, 00:09:19.962 "w_mbytes_per_sec": 0 00:09:19.962 }, 00:09:19.962 "claimed": false, 00:09:19.962 "zoned": false, 00:09:19.962 "supported_io_types": { 00:09:19.962 "read": true, 00:09:19.962 "write": true, 00:09:19.962 "unmap": true, 00:09:19.962 "flush": true, 00:09:19.962 "reset": true, 00:09:19.962 "nvme_admin": false, 00:09:19.962 "nvme_io": false, 00:09:19.962 "nvme_io_md": false, 00:09:19.962 "write_zeroes": true, 00:09:19.962 "zcopy": false, 00:09:19.962 "get_zone_info": false, 00:09:19.962 "zone_management": false, 00:09:19.962 "zone_append": false, 00:09:19.962 "compare": false, 00:09:19.962 "compare_and_write": false, 00:09:19.962 "abort": false, 00:09:19.962 "seek_hole": false, 00:09:19.962 "seek_data": false, 00:09:19.962 "copy": false, 00:09:19.962 "nvme_iov_md": false 00:09:19.962 }, 00:09:19.962 "memory_domains": [ 00:09:19.962 { 00:09:19.962 "dma_device_id": "system", 00:09:19.962 "dma_device_type": 1 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.962 "dma_device_type": 2 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "dma_device_id": "system", 00:09:19.962 "dma_device_type": 1 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.962 "dma_device_type": 2 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "dma_device_id": "system", 00:09:19.962 "dma_device_type": 1 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.962 "dma_device_type": 2 00:09:19.962 } 00:09:19.962 ], 00:09:19.962 "driver_specific": { 00:09:19.962 "raid": { 00:09:19.962 "uuid": "345035f5-0c03-4d4d-8588-aad666cff0e0", 00:09:19.962 "strip_size_kb": 64, 00:09:19.962 "state": "online", 00:09:19.962 "raid_level": "concat", 00:09:19.962 "superblock": false, 00:09:19.962 "num_base_bdevs": 3, 00:09:19.962 "num_base_bdevs_discovered": 3, 00:09:19.962 "num_base_bdevs_operational": 3, 00:09:19.962 "base_bdevs_list": [ 00:09:19.962 { 00:09:19.962 "name": "BaseBdev1", 
00:09:19.962 "uuid": "108dc8d4-be83-46f1-8220-74c2e82f734f", 00:09:19.962 "is_configured": true, 00:09:19.962 "data_offset": 0, 00:09:19.962 "data_size": 65536 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "name": "BaseBdev2", 00:09:19.962 "uuid": "8a0c72d4-ac00-4931-8c82-95b79a10ce4d", 00:09:19.962 "is_configured": true, 00:09:19.962 "data_offset": 0, 00:09:19.962 "data_size": 65536 00:09:19.962 }, 00:09:19.962 { 00:09:19.962 "name": "BaseBdev3", 00:09:19.962 "uuid": "d8deed70-1051-41f2-9321-8d5379451054", 00:09:19.962 "is_configured": true, 00:09:19.962 "data_offset": 0, 00:09:19.962 "data_size": 65536 00:09:19.962 } 00:09:19.962 ] 00:09:19.962 } 00:09:19.962 } 00:09:19.962 }' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:19.962 BaseBdev2 00:09:19.962 BaseBdev3' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.962 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 [2024-11-18 03:58:16.698608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.222 [2024-11-18 03:58:16.698642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.222 [2024-11-18 03:58:16.698700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.222 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.222 "name": "Existed_Raid", 00:09:20.222 "uuid": "345035f5-0c03-4d4d-8588-aad666cff0e0", 00:09:20.222 "strip_size_kb": 64, 00:09:20.222 "state": "offline", 00:09:20.222 "raid_level": "concat", 00:09:20.222 "superblock": false, 00:09:20.222 "num_base_bdevs": 3, 00:09:20.222 "num_base_bdevs_discovered": 2, 00:09:20.222 "num_base_bdevs_operational": 2, 00:09:20.222 "base_bdevs_list": [ 00:09:20.222 { 00:09:20.222 "name": null, 00:09:20.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.223 "is_configured": false, 00:09:20.223 "data_offset": 0, 00:09:20.223 "data_size": 65536 00:09:20.223 }, 00:09:20.223 { 00:09:20.223 "name": "BaseBdev2", 00:09:20.223 "uuid": 
"8a0c72d4-ac00-4931-8c82-95b79a10ce4d", 00:09:20.223 "is_configured": true, 00:09:20.223 "data_offset": 0, 00:09:20.223 "data_size": 65536 00:09:20.223 }, 00:09:20.223 { 00:09:20.223 "name": "BaseBdev3", 00:09:20.223 "uuid": "d8deed70-1051-41f2-9321-8d5379451054", 00:09:20.223 "is_configured": true, 00:09:20.223 "data_offset": 0, 00:09:20.223 "data_size": 65536 00:09:20.223 } 00:09:20.223 ] 00:09:20.223 }' 00:09:20.223 03:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.223 03:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.790 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:20.790 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.790 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.790 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.790 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 [2024-11-18 03:58:17.299044] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:20.791 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.051 [2024-11-18 03:58:17.436623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.051 [2024-11-18 03:58:17.436748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.051 03:58:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.051 BaseBdev2 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.051 
03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.051 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.051 [ 00:09:21.051 { 00:09:21.051 "name": "BaseBdev2", 00:09:21.051 "aliases": [ 00:09:21.051 "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836" 00:09:21.051 ], 00:09:21.051 "product_name": "Malloc disk", 00:09:21.051 "block_size": 512, 00:09:21.051 "num_blocks": 65536, 00:09:21.051 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:21.051 "assigned_rate_limits": { 00:09:21.051 "rw_ios_per_sec": 0, 00:09:21.051 "rw_mbytes_per_sec": 0, 00:09:21.051 "r_mbytes_per_sec": 0, 00:09:21.051 "w_mbytes_per_sec": 0 00:09:21.051 }, 00:09:21.051 "claimed": false, 00:09:21.051 "zoned": false, 00:09:21.051 "supported_io_types": { 00:09:21.051 "read": true, 00:09:21.052 "write": true, 00:09:21.052 "unmap": true, 00:09:21.052 "flush": true, 00:09:21.052 "reset": true, 00:09:21.052 "nvme_admin": false, 00:09:21.052 "nvme_io": false, 00:09:21.052 "nvme_io_md": false, 00:09:21.052 "write_zeroes": true, 
00:09:21.052 "zcopy": true, 00:09:21.052 "get_zone_info": false, 00:09:21.052 "zone_management": false, 00:09:21.052 "zone_append": false, 00:09:21.052 "compare": false, 00:09:21.052 "compare_and_write": false, 00:09:21.052 "abort": true, 00:09:21.052 "seek_hole": false, 00:09:21.052 "seek_data": false, 00:09:21.052 "copy": true, 00:09:21.052 "nvme_iov_md": false 00:09:21.052 }, 00:09:21.052 "memory_domains": [ 00:09:21.052 { 00:09:21.052 "dma_device_id": "system", 00:09:21.052 "dma_device_type": 1 00:09:21.052 }, 00:09:21.052 { 00:09:21.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.052 "dma_device_type": 2 00:09:21.052 } 00:09:21.052 ], 00:09:21.052 "driver_specific": {} 00:09:21.052 } 00:09:21.052 ] 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.052 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 BaseBdev3 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.312 03:58:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.312 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.312 [ 00:09:21.312 { 00:09:21.312 "name": "BaseBdev3", 00:09:21.312 "aliases": [ 00:09:21.312 "ee99570a-0025-4666-aba3-b7b4b80c4c80" 00:09:21.312 ], 00:09:21.312 "product_name": "Malloc disk", 00:09:21.312 "block_size": 512, 00:09:21.312 "num_blocks": 65536, 00:09:21.312 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:21.312 "assigned_rate_limits": { 00:09:21.312 "rw_ios_per_sec": 0, 00:09:21.312 "rw_mbytes_per_sec": 0, 00:09:21.312 "r_mbytes_per_sec": 0, 00:09:21.312 "w_mbytes_per_sec": 0 00:09:21.312 }, 00:09:21.312 "claimed": false, 00:09:21.312 "zoned": false, 00:09:21.312 "supported_io_types": { 00:09:21.312 "read": true, 00:09:21.312 "write": true, 00:09:21.312 "unmap": true, 00:09:21.312 "flush": true, 00:09:21.312 "reset": true, 00:09:21.312 "nvme_admin": false, 00:09:21.312 "nvme_io": false, 00:09:21.312 "nvme_io_md": false, 00:09:21.312 "write_zeroes": true, 
00:09:21.312 "zcopy": true, 00:09:21.312 "get_zone_info": false, 00:09:21.312 "zone_management": false, 00:09:21.312 "zone_append": false, 00:09:21.312 "compare": false, 00:09:21.312 "compare_and_write": false, 00:09:21.312 "abort": true, 00:09:21.312 "seek_hole": false, 00:09:21.312 "seek_data": false, 00:09:21.312 "copy": true, 00:09:21.312 "nvme_iov_md": false 00:09:21.312 }, 00:09:21.312 "memory_domains": [ 00:09:21.312 { 00:09:21.313 "dma_device_id": "system", 00:09:21.313 "dma_device_type": 1 00:09:21.313 }, 00:09:21.313 { 00:09:21.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.313 "dma_device_type": 2 00:09:21.313 } 00:09:21.313 ], 00:09:21.313 "driver_specific": {} 00:09:21.313 } 00:09:21.313 ] 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.313 [2024-11-18 03:58:17.744748] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.313 [2024-11-18 03:58:17.744855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.313 [2024-11-18 03:58:17.744923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.313 [2024-11-18 03:58:17.746669] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.313 "name": "Existed_Raid", 00:09:21.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.313 "strip_size_kb": 64, 00:09:21.313 "state": "configuring", 00:09:21.313 "raid_level": "concat", 00:09:21.313 "superblock": false, 00:09:21.313 "num_base_bdevs": 3, 00:09:21.313 "num_base_bdevs_discovered": 2, 00:09:21.313 "num_base_bdevs_operational": 3, 00:09:21.313 "base_bdevs_list": [ 00:09:21.313 { 00:09:21.313 "name": "BaseBdev1", 00:09:21.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.313 "is_configured": false, 00:09:21.313 "data_offset": 0, 00:09:21.313 "data_size": 0 00:09:21.313 }, 00:09:21.313 { 00:09:21.313 "name": "BaseBdev2", 00:09:21.313 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:21.313 "is_configured": true, 00:09:21.313 "data_offset": 0, 00:09:21.313 "data_size": 65536 00:09:21.313 }, 00:09:21.313 { 00:09:21.313 "name": "BaseBdev3", 00:09:21.313 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:21.313 "is_configured": true, 00:09:21.313 "data_offset": 0, 00:09:21.313 "data_size": 65536 00:09:21.313 } 00:09:21.313 ] 00:09:21.313 }' 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.313 03:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.573 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:21.573 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.573 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.832 [2024-11-18 03:58:18.215988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.832 "name": "Existed_Raid", 00:09:21.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.832 "strip_size_kb": 64, 00:09:21.832 "state": "configuring", 00:09:21.832 "raid_level": "concat", 00:09:21.832 "superblock": false, 
00:09:21.832 "num_base_bdevs": 3, 00:09:21.832 "num_base_bdevs_discovered": 1, 00:09:21.832 "num_base_bdevs_operational": 3, 00:09:21.832 "base_bdevs_list": [ 00:09:21.832 { 00:09:21.832 "name": "BaseBdev1", 00:09:21.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.832 "is_configured": false, 00:09:21.832 "data_offset": 0, 00:09:21.832 "data_size": 0 00:09:21.832 }, 00:09:21.832 { 00:09:21.832 "name": null, 00:09:21.832 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:21.832 "is_configured": false, 00:09:21.832 "data_offset": 0, 00:09:21.832 "data_size": 65536 00:09:21.832 }, 00:09:21.832 { 00:09:21.832 "name": "BaseBdev3", 00:09:21.832 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:21.832 "is_configured": true, 00:09:21.832 "data_offset": 0, 00:09:21.832 "data_size": 65536 00:09:21.832 } 00:09:21.832 ] 00:09:21.832 }' 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.832 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.091 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.091 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.091 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.091 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.351 
03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.351 [2024-11-18 03:58:18.772021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.351 BaseBdev1 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.351 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.352 [ 00:09:22.352 { 00:09:22.352 "name": "BaseBdev1", 00:09:22.352 "aliases": [ 00:09:22.352 "625f0ed1-d680-46a8-8be8-eb00be1a90c1" 00:09:22.352 ], 00:09:22.352 "product_name": 
"Malloc disk", 00:09:22.352 "block_size": 512, 00:09:22.352 "num_blocks": 65536, 00:09:22.352 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:22.352 "assigned_rate_limits": { 00:09:22.352 "rw_ios_per_sec": 0, 00:09:22.352 "rw_mbytes_per_sec": 0, 00:09:22.352 "r_mbytes_per_sec": 0, 00:09:22.352 "w_mbytes_per_sec": 0 00:09:22.352 }, 00:09:22.352 "claimed": true, 00:09:22.352 "claim_type": "exclusive_write", 00:09:22.352 "zoned": false, 00:09:22.352 "supported_io_types": { 00:09:22.352 "read": true, 00:09:22.352 "write": true, 00:09:22.352 "unmap": true, 00:09:22.352 "flush": true, 00:09:22.352 "reset": true, 00:09:22.352 "nvme_admin": false, 00:09:22.352 "nvme_io": false, 00:09:22.352 "nvme_io_md": false, 00:09:22.352 "write_zeroes": true, 00:09:22.352 "zcopy": true, 00:09:22.352 "get_zone_info": false, 00:09:22.352 "zone_management": false, 00:09:22.352 "zone_append": false, 00:09:22.352 "compare": false, 00:09:22.352 "compare_and_write": false, 00:09:22.352 "abort": true, 00:09:22.352 "seek_hole": false, 00:09:22.352 "seek_data": false, 00:09:22.352 "copy": true, 00:09:22.352 "nvme_iov_md": false 00:09:22.352 }, 00:09:22.352 "memory_domains": [ 00:09:22.352 { 00:09:22.352 "dma_device_id": "system", 00:09:22.352 "dma_device_type": 1 00:09:22.352 }, 00:09:22.352 { 00:09:22.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.352 "dma_device_type": 2 00:09:22.352 } 00:09:22.352 ], 00:09:22.352 "driver_specific": {} 00:09:22.352 } 00:09:22.352 ] 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.352 03:58:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.352 "name": "Existed_Raid", 00:09:22.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.352 "strip_size_kb": 64, 00:09:22.352 "state": "configuring", 00:09:22.352 "raid_level": "concat", 00:09:22.352 "superblock": false, 00:09:22.352 "num_base_bdevs": 3, 00:09:22.352 "num_base_bdevs_discovered": 2, 00:09:22.352 "num_base_bdevs_operational": 3, 00:09:22.352 "base_bdevs_list": [ 00:09:22.352 { 00:09:22.352 "name": "BaseBdev1", 
00:09:22.352 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:22.352 "is_configured": true, 00:09:22.352 "data_offset": 0, 00:09:22.352 "data_size": 65536 00:09:22.352 }, 00:09:22.352 { 00:09:22.352 "name": null, 00:09:22.352 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:22.352 "is_configured": false, 00:09:22.352 "data_offset": 0, 00:09:22.352 "data_size": 65536 00:09:22.352 }, 00:09:22.352 { 00:09:22.352 "name": "BaseBdev3", 00:09:22.352 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:22.352 "is_configured": true, 00:09:22.352 "data_offset": 0, 00:09:22.352 "data_size": 65536 00:09:22.352 } 00:09:22.352 ] 00:09:22.352 }' 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.352 03:58:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.612 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.612 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.612 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.612 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:22.612 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.870 [2024-11-18 03:58:19.275348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.870 
03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.870 "name": "Existed_Raid", 00:09:22.870 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:22.870 "strip_size_kb": 64, 00:09:22.870 "state": "configuring", 00:09:22.870 "raid_level": "concat", 00:09:22.870 "superblock": false, 00:09:22.870 "num_base_bdevs": 3, 00:09:22.870 "num_base_bdevs_discovered": 1, 00:09:22.870 "num_base_bdevs_operational": 3, 00:09:22.870 "base_bdevs_list": [ 00:09:22.870 { 00:09:22.870 "name": "BaseBdev1", 00:09:22.870 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:22.870 "is_configured": true, 00:09:22.870 "data_offset": 0, 00:09:22.870 "data_size": 65536 00:09:22.870 }, 00:09:22.870 { 00:09:22.870 "name": null, 00:09:22.870 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:22.870 "is_configured": false, 00:09:22.870 "data_offset": 0, 00:09:22.870 "data_size": 65536 00:09:22.870 }, 00:09:22.870 { 00:09:22.870 "name": null, 00:09:22.870 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:22.870 "is_configured": false, 00:09:22.870 "data_offset": 0, 00:09:22.870 "data_size": 65536 00:09:22.870 } 00:09:22.870 ] 00:09:22.870 }' 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.870 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.128 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.128 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.128 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.128 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.128 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.388 [2024-11-18 03:58:19.778519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.388 "name": "Existed_Raid", 00:09:23.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.388 "strip_size_kb": 64, 00:09:23.388 "state": "configuring", 00:09:23.388 "raid_level": "concat", 00:09:23.388 "superblock": false, 00:09:23.388 "num_base_bdevs": 3, 00:09:23.388 "num_base_bdevs_discovered": 2, 00:09:23.388 "num_base_bdevs_operational": 3, 00:09:23.388 "base_bdevs_list": [ 00:09:23.388 { 00:09:23.388 "name": "BaseBdev1", 00:09:23.388 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:23.388 "is_configured": true, 00:09:23.388 "data_offset": 0, 00:09:23.388 "data_size": 65536 00:09:23.388 }, 00:09:23.388 { 00:09:23.388 "name": null, 00:09:23.388 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:23.388 "is_configured": false, 00:09:23.388 "data_offset": 0, 00:09:23.388 "data_size": 65536 00:09:23.388 }, 00:09:23.388 { 00:09:23.388 "name": "BaseBdev3", 00:09:23.388 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:23.388 "is_configured": true, 00:09:23.388 "data_offset": 0, 00:09:23.388 "data_size": 65536 00:09:23.388 } 00:09:23.388 ] 00:09:23.388 }' 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.388 03:58:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.648 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.648 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.648 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:23.648 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.648 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.908 [2024-11-18 03:58:20.297643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.908 03:58:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.908 "name": "Existed_Raid", 00:09:23.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.908 "strip_size_kb": 64, 00:09:23.908 "state": "configuring", 00:09:23.908 "raid_level": "concat", 00:09:23.908 "superblock": false, 00:09:23.908 "num_base_bdevs": 3, 00:09:23.908 "num_base_bdevs_discovered": 1, 00:09:23.908 "num_base_bdevs_operational": 3, 00:09:23.908 "base_bdevs_list": [ 00:09:23.908 { 00:09:23.908 "name": null, 00:09:23.908 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:23.908 "is_configured": false, 00:09:23.908 "data_offset": 0, 00:09:23.908 "data_size": 65536 00:09:23.908 }, 00:09:23.908 { 00:09:23.908 "name": null, 00:09:23.908 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:23.908 "is_configured": false, 00:09:23.908 "data_offset": 0, 00:09:23.908 "data_size": 65536 00:09:23.908 }, 00:09:23.908 { 00:09:23.908 "name": "BaseBdev3", 00:09:23.908 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:23.908 "is_configured": true, 00:09:23.908 "data_offset": 0, 00:09:23.908 "data_size": 65536 00:09:23.908 } 00:09:23.908 ] 00:09:23.908 }' 00:09:23.908 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.908 03:58:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.478 [2024-11-18 03:58:20.859527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.478 03:58:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.478 "name": "Existed_Raid", 00:09:24.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.478 "strip_size_kb": 64, 00:09:24.478 "state": "configuring", 00:09:24.478 "raid_level": "concat", 00:09:24.478 "superblock": false, 00:09:24.478 "num_base_bdevs": 3, 00:09:24.478 "num_base_bdevs_discovered": 2, 00:09:24.478 "num_base_bdevs_operational": 3, 00:09:24.478 "base_bdevs_list": [ 00:09:24.478 { 00:09:24.478 "name": null, 00:09:24.478 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:24.478 "is_configured": false, 00:09:24.478 "data_offset": 0, 00:09:24.478 "data_size": 65536 00:09:24.478 }, 00:09:24.478 { 00:09:24.478 "name": "BaseBdev2", 00:09:24.478 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:24.478 "is_configured": true, 00:09:24.478 "data_offset": 
0, 00:09:24.478 "data_size": 65536 00:09:24.478 }, 00:09:24.478 { 00:09:24.478 "name": "BaseBdev3", 00:09:24.478 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:24.478 "is_configured": true, 00:09:24.478 "data_offset": 0, 00:09:24.478 "data_size": 65536 00:09:24.478 } 00:09:24.478 ] 00:09:24.478 }' 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.478 03:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.737 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.737 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.737 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.737 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.737 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 625f0ed1-d680-46a8-8be8-eb00be1a90c1 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.996 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.996 [2024-11-18 03:58:21.467771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:24.996 [2024-11-18 03:58:21.467922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:24.996 [2024-11-18 03:58:21.467957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:24.996 [2024-11-18 03:58:21.468272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:24.996 [2024-11-18 03:58:21.468489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:24.996 [2024-11-18 03:58:21.468537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:24.997 [2024-11-18 03:58:21.468811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.997 NewBaseBdev 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.997 
03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.997 [ 00:09:24.997 { 00:09:24.997 "name": "NewBaseBdev", 00:09:24.997 "aliases": [ 00:09:24.997 "625f0ed1-d680-46a8-8be8-eb00be1a90c1" 00:09:24.997 ], 00:09:24.997 "product_name": "Malloc disk", 00:09:24.997 "block_size": 512, 00:09:24.997 "num_blocks": 65536, 00:09:24.997 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:24.997 "assigned_rate_limits": { 00:09:24.997 "rw_ios_per_sec": 0, 00:09:24.997 "rw_mbytes_per_sec": 0, 00:09:24.997 "r_mbytes_per_sec": 0, 00:09:24.997 "w_mbytes_per_sec": 0 00:09:24.997 }, 00:09:24.997 "claimed": true, 00:09:24.997 "claim_type": "exclusive_write", 00:09:24.997 "zoned": false, 00:09:24.997 "supported_io_types": { 00:09:24.997 "read": true, 00:09:24.997 "write": true, 00:09:24.997 "unmap": true, 00:09:24.997 "flush": true, 00:09:24.997 "reset": true, 00:09:24.997 "nvme_admin": false, 00:09:24.997 "nvme_io": false, 00:09:24.997 "nvme_io_md": false, 00:09:24.997 "write_zeroes": true, 00:09:24.997 "zcopy": true, 00:09:24.997 "get_zone_info": false, 00:09:24.997 "zone_management": false, 00:09:24.997 "zone_append": false, 00:09:24.997 "compare": false, 00:09:24.997 "compare_and_write": false, 00:09:24.997 "abort": true, 00:09:24.997 "seek_hole": false, 00:09:24.997 "seek_data": false, 00:09:24.997 "copy": true, 00:09:24.997 "nvme_iov_md": false 00:09:24.997 }, 00:09:24.997 
"memory_domains": [ 00:09:24.997 { 00:09:24.997 "dma_device_id": "system", 00:09:24.997 "dma_device_type": 1 00:09:24.997 }, 00:09:24.997 { 00:09:24.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.997 "dma_device_type": 2 00:09:24.997 } 00:09:24.997 ], 00:09:24.997 "driver_specific": {} 00:09:24.997 } 00:09:24.997 ] 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.997 "name": "Existed_Raid", 00:09:24.997 "uuid": "4d691a88-3af7-4094-8f86-674dacbd271c", 00:09:24.997 "strip_size_kb": 64, 00:09:24.997 "state": "online", 00:09:24.997 "raid_level": "concat", 00:09:24.997 "superblock": false, 00:09:24.997 "num_base_bdevs": 3, 00:09:24.997 "num_base_bdevs_discovered": 3, 00:09:24.997 "num_base_bdevs_operational": 3, 00:09:24.997 "base_bdevs_list": [ 00:09:24.997 { 00:09:24.997 "name": "NewBaseBdev", 00:09:24.997 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:24.997 "is_configured": true, 00:09:24.997 "data_offset": 0, 00:09:24.997 "data_size": 65536 00:09:24.997 }, 00:09:24.997 { 00:09:24.997 "name": "BaseBdev2", 00:09:24.997 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:24.997 "is_configured": true, 00:09:24.997 "data_offset": 0, 00:09:24.997 "data_size": 65536 00:09:24.997 }, 00:09:24.997 { 00:09:24.997 "name": "BaseBdev3", 00:09:24.997 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:24.997 "is_configured": true, 00:09:24.997 "data_offset": 0, 00:09:24.997 "data_size": 65536 00:09:24.997 } 00:09:24.997 ] 00:09:24.997 }' 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.997 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.567 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.567 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.567 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:25.567 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.567 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.568 [2024-11-18 03:58:21.927566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.568 "name": "Existed_Raid", 00:09:25.568 "aliases": [ 00:09:25.568 "4d691a88-3af7-4094-8f86-674dacbd271c" 00:09:25.568 ], 00:09:25.568 "product_name": "Raid Volume", 00:09:25.568 "block_size": 512, 00:09:25.568 "num_blocks": 196608, 00:09:25.568 "uuid": "4d691a88-3af7-4094-8f86-674dacbd271c", 00:09:25.568 "assigned_rate_limits": { 00:09:25.568 "rw_ios_per_sec": 0, 00:09:25.568 "rw_mbytes_per_sec": 0, 00:09:25.568 "r_mbytes_per_sec": 0, 00:09:25.568 "w_mbytes_per_sec": 0 00:09:25.568 }, 00:09:25.568 "claimed": false, 00:09:25.568 "zoned": false, 00:09:25.568 "supported_io_types": { 00:09:25.568 "read": true, 00:09:25.568 "write": true, 00:09:25.568 "unmap": true, 00:09:25.568 "flush": true, 00:09:25.568 "reset": true, 00:09:25.568 "nvme_admin": false, 00:09:25.568 "nvme_io": false, 00:09:25.568 "nvme_io_md": false, 00:09:25.568 "write_zeroes": true, 
00:09:25.568 "zcopy": false, 00:09:25.568 "get_zone_info": false, 00:09:25.568 "zone_management": false, 00:09:25.568 "zone_append": false, 00:09:25.568 "compare": false, 00:09:25.568 "compare_and_write": false, 00:09:25.568 "abort": false, 00:09:25.568 "seek_hole": false, 00:09:25.568 "seek_data": false, 00:09:25.568 "copy": false, 00:09:25.568 "nvme_iov_md": false 00:09:25.568 }, 00:09:25.568 "memory_domains": [ 00:09:25.568 { 00:09:25.568 "dma_device_id": "system", 00:09:25.568 "dma_device_type": 1 00:09:25.568 }, 00:09:25.568 { 00:09:25.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.568 "dma_device_type": 2 00:09:25.568 }, 00:09:25.568 { 00:09:25.568 "dma_device_id": "system", 00:09:25.568 "dma_device_type": 1 00:09:25.568 }, 00:09:25.568 { 00:09:25.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.568 "dma_device_type": 2 00:09:25.568 }, 00:09:25.568 { 00:09:25.568 "dma_device_id": "system", 00:09:25.568 "dma_device_type": 1 00:09:25.568 }, 00:09:25.568 { 00:09:25.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.568 "dma_device_type": 2 00:09:25.568 } 00:09:25.568 ], 00:09:25.568 "driver_specific": { 00:09:25.568 "raid": { 00:09:25.568 "uuid": "4d691a88-3af7-4094-8f86-674dacbd271c", 00:09:25.568 "strip_size_kb": 64, 00:09:25.568 "state": "online", 00:09:25.568 "raid_level": "concat", 00:09:25.568 "superblock": false, 00:09:25.568 "num_base_bdevs": 3, 00:09:25.568 "num_base_bdevs_discovered": 3, 00:09:25.568 "num_base_bdevs_operational": 3, 00:09:25.568 "base_bdevs_list": [ 00:09:25.568 { 00:09:25.568 "name": "NewBaseBdev", 00:09:25.568 "uuid": "625f0ed1-d680-46a8-8be8-eb00be1a90c1", 00:09:25.568 "is_configured": true, 00:09:25.568 "data_offset": 0, 00:09:25.568 "data_size": 65536 00:09:25.568 }, 00:09:25.568 { 00:09:25.568 "name": "BaseBdev2", 00:09:25.568 "uuid": "b7b2f48c-ad44-47a3-a7a1-387cc3ee5836", 00:09:25.568 "is_configured": true, 00:09:25.568 "data_offset": 0, 00:09:25.568 "data_size": 65536 00:09:25.568 }, 00:09:25.568 { 
00:09:25.568 "name": "BaseBdev3", 00:09:25.568 "uuid": "ee99570a-0025-4666-aba3-b7b4b80c4c80", 00:09:25.568 "is_configured": true, 00:09:25.568 "data_offset": 0, 00:09:25.568 "data_size": 65536 00:09:25.568 } 00:09:25.568 ] 00:09:25.568 } 00:09:25.568 } 00:09:25.568 }' 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:25.568 BaseBdev2 00:09:25.568 BaseBdev3' 00:09:25.568 03:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:25.568 [2024-11-18 03:58:22.190797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.568 [2024-11-18 03:58:22.190894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.568 [2024-11-18 03:58:22.191031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.568 [2024-11-18 03:58:22.191128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.568 [2024-11-18 03:58:22.191191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65587 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65587 ']' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65587 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.568 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65587 00:09:25.828 killing process with pid 65587 00:09:25.828 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.828 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.828 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65587' 00:09:25.828 03:58:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65587 00:09:25.828 [2024-11-18 03:58:22.235309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.828 03:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65587 00:09:26.088 [2024-11-18 03:58:22.536675] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.026 03:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:27.026 00:09:27.026 real 0m10.584s 00:09:27.026 user 0m16.887s 00:09:27.026 sys 0m1.837s 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.027 ************************************ 00:09:27.027 END TEST raid_state_function_test 00:09:27.027 ************************************ 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.027 03:58:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:27.027 03:58:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:27.027 03:58:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.027 03:58:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.027 ************************************ 00:09:27.027 START TEST raid_state_function_test_sb 00:09:27.027 ************************************ 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:27.027 03:58:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:27.286 Process raid pid: 66208 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66208 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:27.286 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66208' 00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66208 00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66208 ']' 00:09:27.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.287 03:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.287 [2024-11-18 03:58:23.760751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:27.287 [2024-11-18 03:58:23.760901] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.547 [2024-11-18 03:58:23.936565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.547 [2024-11-18 03:58:24.049902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.808 [2024-11-18 03:58:24.252471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.808 [2024-11-18 03:58:24.252508] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.068 [2024-11-18 03:58:24.583772] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.068 [2024-11-18 03:58:24.583934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.068 [2024-11-18 
03:58:24.583969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.068 [2024-11-18 03:58:24.583982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.068 [2024-11-18 03:58:24.583991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.068 [2024-11-18 03:58:24.584002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.068 "name": "Existed_Raid", 00:09:28.068 "uuid": "3b23a052-e08a-4ad5-896f-ecc6fdbf065d", 00:09:28.068 "strip_size_kb": 64, 00:09:28.068 "state": "configuring", 00:09:28.068 "raid_level": "concat", 00:09:28.068 "superblock": true, 00:09:28.068 "num_base_bdevs": 3, 00:09:28.068 "num_base_bdevs_discovered": 0, 00:09:28.068 "num_base_bdevs_operational": 3, 00:09:28.068 "base_bdevs_list": [ 00:09:28.068 { 00:09:28.068 "name": "BaseBdev1", 00:09:28.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.068 "is_configured": false, 00:09:28.068 "data_offset": 0, 00:09:28.068 "data_size": 0 00:09:28.068 }, 00:09:28.068 { 00:09:28.068 "name": "BaseBdev2", 00:09:28.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.068 "is_configured": false, 00:09:28.068 "data_offset": 0, 00:09:28.068 "data_size": 0 00:09:28.068 }, 00:09:28.068 { 00:09:28.068 "name": "BaseBdev3", 00:09:28.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.068 "is_configured": false, 00:09:28.068 "data_offset": 0, 00:09:28.068 "data_size": 0 00:09:28.068 } 00:09:28.068 ] 00:09:28.068 }' 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.068 03:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.675 [2024-11-18 03:58:25.015018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.675 [2024-11-18 03:58:25.015181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.675 [2024-11-18 03:58:25.026957] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.675 [2024-11-18 03:58:25.027052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.675 [2024-11-18 03:58:25.027079] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.675 [2024-11-18 03:58:25.027101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.675 [2024-11-18 03:58:25.027118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.675 [2024-11-18 03:58:25.027138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.675 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.675 
03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.676 [2024-11-18 03:58:25.081468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.676 BaseBdev1 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.676 [ 00:09:28.676 { 
00:09:28.676 "name": "BaseBdev1", 00:09:28.676 "aliases": [ 00:09:28.676 "f788abc5-49c4-4294-bbcc-cb81f1b80ab3" 00:09:28.676 ], 00:09:28.676 "product_name": "Malloc disk", 00:09:28.676 "block_size": 512, 00:09:28.676 "num_blocks": 65536, 00:09:28.676 "uuid": "f788abc5-49c4-4294-bbcc-cb81f1b80ab3", 00:09:28.676 "assigned_rate_limits": { 00:09:28.676 "rw_ios_per_sec": 0, 00:09:28.676 "rw_mbytes_per_sec": 0, 00:09:28.676 "r_mbytes_per_sec": 0, 00:09:28.676 "w_mbytes_per_sec": 0 00:09:28.676 }, 00:09:28.676 "claimed": true, 00:09:28.676 "claim_type": "exclusive_write", 00:09:28.676 "zoned": false, 00:09:28.676 "supported_io_types": { 00:09:28.676 "read": true, 00:09:28.676 "write": true, 00:09:28.676 "unmap": true, 00:09:28.676 "flush": true, 00:09:28.676 "reset": true, 00:09:28.676 "nvme_admin": false, 00:09:28.676 "nvme_io": false, 00:09:28.676 "nvme_io_md": false, 00:09:28.676 "write_zeroes": true, 00:09:28.676 "zcopy": true, 00:09:28.676 "get_zone_info": false, 00:09:28.676 "zone_management": false, 00:09:28.676 "zone_append": false, 00:09:28.676 "compare": false, 00:09:28.676 "compare_and_write": false, 00:09:28.676 "abort": true, 00:09:28.676 "seek_hole": false, 00:09:28.676 "seek_data": false, 00:09:28.676 "copy": true, 00:09:28.676 "nvme_iov_md": false 00:09:28.676 }, 00:09:28.676 "memory_domains": [ 00:09:28.676 { 00:09:28.676 "dma_device_id": "system", 00:09:28.676 "dma_device_type": 1 00:09:28.676 }, 00:09:28.676 { 00:09:28.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.676 "dma_device_type": 2 00:09:28.676 } 00:09:28.676 ], 00:09:28.676 "driver_specific": {} 00:09:28.676 } 00:09:28.676 ] 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.676 "name": "Existed_Raid", 00:09:28.676 "uuid": "d53eb271-80f3-45d7-b36c-9e71cedfe434", 00:09:28.676 "strip_size_kb": 64, 00:09:28.676 "state": "configuring", 00:09:28.676 "raid_level": "concat", 00:09:28.676 "superblock": true, 00:09:28.676 
"num_base_bdevs": 3, 00:09:28.676 "num_base_bdevs_discovered": 1, 00:09:28.676 "num_base_bdevs_operational": 3, 00:09:28.676 "base_bdevs_list": [ 00:09:28.676 { 00:09:28.676 "name": "BaseBdev1", 00:09:28.676 "uuid": "f788abc5-49c4-4294-bbcc-cb81f1b80ab3", 00:09:28.676 "is_configured": true, 00:09:28.676 "data_offset": 2048, 00:09:28.676 "data_size": 63488 00:09:28.676 }, 00:09:28.676 { 00:09:28.676 "name": "BaseBdev2", 00:09:28.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.676 "is_configured": false, 00:09:28.676 "data_offset": 0, 00:09:28.676 "data_size": 0 00:09:28.676 }, 00:09:28.676 { 00:09:28.676 "name": "BaseBdev3", 00:09:28.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.676 "is_configured": false, 00:09:28.676 "data_offset": 0, 00:09:28.676 "data_size": 0 00:09:28.676 } 00:09:28.676 ] 00:09:28.676 }' 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.676 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.246 [2024-11-18 03:58:25.588789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.246 [2024-11-18 03:58:25.588900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.246 
03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.246 [2024-11-18 03:58:25.600775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.246 [2024-11-18 03:58:25.602969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.246 [2024-11-18 03:58:25.603092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.246 [2024-11-18 03:58:25.603108] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.246 [2024-11-18 03:58:25.603117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.246 "name": "Existed_Raid", 00:09:29.246 "uuid": "d07ab966-fd67-4728-a098-840bcf19d039", 00:09:29.246 "strip_size_kb": 64, 00:09:29.246 "state": "configuring", 00:09:29.246 "raid_level": "concat", 00:09:29.246 "superblock": true, 00:09:29.246 "num_base_bdevs": 3, 00:09:29.246 "num_base_bdevs_discovered": 1, 00:09:29.246 "num_base_bdevs_operational": 3, 00:09:29.246 "base_bdevs_list": [ 00:09:29.246 { 00:09:29.246 "name": "BaseBdev1", 00:09:29.246 "uuid": "f788abc5-49c4-4294-bbcc-cb81f1b80ab3", 00:09:29.246 "is_configured": true, 00:09:29.246 "data_offset": 2048, 00:09:29.246 "data_size": 63488 00:09:29.246 }, 00:09:29.246 { 00:09:29.246 "name": "BaseBdev2", 00:09:29.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.246 "is_configured": false, 00:09:29.246 "data_offset": 0, 00:09:29.246 "data_size": 0 00:09:29.246 }, 00:09:29.246 { 00:09:29.246 "name": "BaseBdev3", 00:09:29.246 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:29.246 "is_configured": false, 00:09:29.246 "data_offset": 0, 00:09:29.246 "data_size": 0 00:09:29.246 } 00:09:29.246 ] 00:09:29.246 }' 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.246 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.507 03:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.507 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.507 03:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.507 [2024-11-18 03:58:26.015976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.507 BaseBdev2 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.507 [ 00:09:29.507 { 00:09:29.507 "name": "BaseBdev2", 00:09:29.507 "aliases": [ 00:09:29.507 "d763239d-89f1-4559-a077-ca331458c9db" 00:09:29.507 ], 00:09:29.507 "product_name": "Malloc disk", 00:09:29.507 "block_size": 512, 00:09:29.507 "num_blocks": 65536, 00:09:29.507 "uuid": "d763239d-89f1-4559-a077-ca331458c9db", 00:09:29.507 "assigned_rate_limits": { 00:09:29.507 "rw_ios_per_sec": 0, 00:09:29.507 "rw_mbytes_per_sec": 0, 00:09:29.507 "r_mbytes_per_sec": 0, 00:09:29.507 "w_mbytes_per_sec": 0 00:09:29.507 }, 00:09:29.507 "claimed": true, 00:09:29.507 "claim_type": "exclusive_write", 00:09:29.507 "zoned": false, 00:09:29.507 "supported_io_types": { 00:09:29.507 "read": true, 00:09:29.507 "write": true, 00:09:29.507 "unmap": true, 00:09:29.507 "flush": true, 00:09:29.507 "reset": true, 00:09:29.507 "nvme_admin": false, 00:09:29.507 "nvme_io": false, 00:09:29.507 "nvme_io_md": false, 00:09:29.507 "write_zeroes": true, 00:09:29.507 "zcopy": true, 00:09:29.507 "get_zone_info": false, 00:09:29.507 "zone_management": false, 00:09:29.507 "zone_append": false, 00:09:29.507 "compare": false, 00:09:29.507 "compare_and_write": false, 00:09:29.507 "abort": true, 00:09:29.507 "seek_hole": false, 00:09:29.507 "seek_data": false, 00:09:29.507 "copy": true, 00:09:29.507 "nvme_iov_md": false 00:09:29.507 }, 00:09:29.507 "memory_domains": [ 00:09:29.507 { 00:09:29.507 "dma_device_id": "system", 00:09:29.507 "dma_device_type": 1 00:09:29.507 }, 00:09:29.507 { 00:09:29.507 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.507 "dma_device_type": 2 00:09:29.507 } 00:09:29.507 ], 00:09:29.507 "driver_specific": {} 00:09:29.507 } 00:09:29.507 ] 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.507 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.507 "name": "Existed_Raid", 00:09:29.507 "uuid": "d07ab966-fd67-4728-a098-840bcf19d039", 00:09:29.507 "strip_size_kb": 64, 00:09:29.507 "state": "configuring", 00:09:29.507 "raid_level": "concat", 00:09:29.507 "superblock": true, 00:09:29.507 "num_base_bdevs": 3, 00:09:29.507 "num_base_bdevs_discovered": 2, 00:09:29.507 "num_base_bdevs_operational": 3, 00:09:29.507 "base_bdevs_list": [ 00:09:29.507 { 00:09:29.507 "name": "BaseBdev1", 00:09:29.507 "uuid": "f788abc5-49c4-4294-bbcc-cb81f1b80ab3", 00:09:29.507 "is_configured": true, 00:09:29.507 "data_offset": 2048, 00:09:29.507 "data_size": 63488 00:09:29.507 }, 00:09:29.507 { 00:09:29.507 "name": "BaseBdev2", 00:09:29.507 "uuid": "d763239d-89f1-4559-a077-ca331458c9db", 00:09:29.507 "is_configured": true, 00:09:29.507 "data_offset": 2048, 00:09:29.507 "data_size": 63488 00:09:29.507 }, 00:09:29.507 { 00:09:29.507 "name": "BaseBdev3", 00:09:29.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.507 "is_configured": false, 00:09:29.507 "data_offset": 0, 00:09:29.507 "data_size": 0 00:09:29.507 } 00:09:29.507 ] 00:09:29.507 }' 00:09:29.508 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.508 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.078 03:58:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.078 [2024-11-18 03:58:26.555191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.078 [2024-11-18 03:58:26.555508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.078 [2024-11-18 03:58:26.555536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.078 [2024-11-18 03:58:26.555866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.078 BaseBdev3 00:09:30.078 [2024-11-18 03:58:26.556057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.078 [2024-11-18 03:58:26.556068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:30.078 [2024-11-18 03:58:26.556239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.078 [ 00:09:30.078 { 00:09:30.078 "name": "BaseBdev3", 00:09:30.078 "aliases": [ 00:09:30.078 "3a0b1754-cde1-478f-bfc6-5698282f3b65" 00:09:30.078 ], 00:09:30.078 "product_name": "Malloc disk", 00:09:30.078 "block_size": 512, 00:09:30.078 "num_blocks": 65536, 00:09:30.078 "uuid": "3a0b1754-cde1-478f-bfc6-5698282f3b65", 00:09:30.078 "assigned_rate_limits": { 00:09:30.078 "rw_ios_per_sec": 0, 00:09:30.078 "rw_mbytes_per_sec": 0, 00:09:30.078 "r_mbytes_per_sec": 0, 00:09:30.078 "w_mbytes_per_sec": 0 00:09:30.078 }, 00:09:30.078 "claimed": true, 00:09:30.078 "claim_type": "exclusive_write", 00:09:30.078 "zoned": false, 00:09:30.078 "supported_io_types": { 00:09:30.078 "read": true, 00:09:30.078 "write": true, 00:09:30.078 "unmap": true, 00:09:30.078 "flush": true, 00:09:30.078 "reset": true, 00:09:30.078 "nvme_admin": false, 00:09:30.078 "nvme_io": false, 00:09:30.078 "nvme_io_md": false, 00:09:30.078 "write_zeroes": true, 00:09:30.078 "zcopy": true, 00:09:30.078 "get_zone_info": false, 00:09:30.078 "zone_management": false, 00:09:30.078 "zone_append": false, 00:09:30.078 "compare": false, 00:09:30.078 "compare_and_write": false, 00:09:30.078 "abort": true, 00:09:30.078 "seek_hole": false, 00:09:30.078 "seek_data": false, 
00:09:30.078 "copy": true, 00:09:30.078 "nvme_iov_md": false 00:09:30.078 }, 00:09:30.078 "memory_domains": [ 00:09:30.078 { 00:09:30.078 "dma_device_id": "system", 00:09:30.078 "dma_device_type": 1 00:09:30.078 }, 00:09:30.078 { 00:09:30.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.078 "dma_device_type": 2 00:09:30.078 } 00:09:30.078 ], 00:09:30.078 "driver_specific": {} 00:09:30.078 } 00:09:30.078 ] 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.078 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.079 "name": "Existed_Raid", 00:09:30.079 "uuid": "d07ab966-fd67-4728-a098-840bcf19d039", 00:09:30.079 "strip_size_kb": 64, 00:09:30.079 "state": "online", 00:09:30.079 "raid_level": "concat", 00:09:30.079 "superblock": true, 00:09:30.079 "num_base_bdevs": 3, 00:09:30.079 "num_base_bdevs_discovered": 3, 00:09:30.079 "num_base_bdevs_operational": 3, 00:09:30.079 "base_bdevs_list": [ 00:09:30.079 { 00:09:30.079 "name": "BaseBdev1", 00:09:30.079 "uuid": "f788abc5-49c4-4294-bbcc-cb81f1b80ab3", 00:09:30.079 "is_configured": true, 00:09:30.079 "data_offset": 2048, 00:09:30.079 "data_size": 63488 00:09:30.079 }, 00:09:30.079 { 00:09:30.079 "name": "BaseBdev2", 00:09:30.079 "uuid": "d763239d-89f1-4559-a077-ca331458c9db", 00:09:30.079 "is_configured": true, 00:09:30.079 "data_offset": 2048, 00:09:30.079 "data_size": 63488 00:09:30.079 }, 00:09:30.079 { 00:09:30.079 "name": "BaseBdev3", 00:09:30.079 "uuid": "3a0b1754-cde1-478f-bfc6-5698282f3b65", 00:09:30.079 "is_configured": true, 00:09:30.079 "data_offset": 2048, 00:09:30.079 "data_size": 63488 00:09:30.079 } 00:09:30.079 ] 00:09:30.079 }' 00:09:30.079 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.079 03:58:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.339 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.339 [2024-11-18 03:58:26.970893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.600 03:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.600 "name": "Existed_Raid", 00:09:30.600 "aliases": [ 00:09:30.600 "d07ab966-fd67-4728-a098-840bcf19d039" 00:09:30.600 ], 00:09:30.600 "product_name": "Raid Volume", 00:09:30.600 "block_size": 512, 00:09:30.600 "num_blocks": 190464, 00:09:30.600 "uuid": "d07ab966-fd67-4728-a098-840bcf19d039", 00:09:30.600 "assigned_rate_limits": { 00:09:30.600 "rw_ios_per_sec": 0, 00:09:30.600 "rw_mbytes_per_sec": 0, 00:09:30.600 
"r_mbytes_per_sec": 0, 00:09:30.600 "w_mbytes_per_sec": 0 00:09:30.600 }, 00:09:30.600 "claimed": false, 00:09:30.600 "zoned": false, 00:09:30.600 "supported_io_types": { 00:09:30.600 "read": true, 00:09:30.600 "write": true, 00:09:30.600 "unmap": true, 00:09:30.600 "flush": true, 00:09:30.600 "reset": true, 00:09:30.600 "nvme_admin": false, 00:09:30.600 "nvme_io": false, 00:09:30.600 "nvme_io_md": false, 00:09:30.600 "write_zeroes": true, 00:09:30.600 "zcopy": false, 00:09:30.600 "get_zone_info": false, 00:09:30.600 "zone_management": false, 00:09:30.600 "zone_append": false, 00:09:30.600 "compare": false, 00:09:30.600 "compare_and_write": false, 00:09:30.600 "abort": false, 00:09:30.600 "seek_hole": false, 00:09:30.600 "seek_data": false, 00:09:30.600 "copy": false, 00:09:30.600 "nvme_iov_md": false 00:09:30.600 }, 00:09:30.600 "memory_domains": [ 00:09:30.600 { 00:09:30.600 "dma_device_id": "system", 00:09:30.600 "dma_device_type": 1 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.600 "dma_device_type": 2 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "dma_device_id": "system", 00:09:30.600 "dma_device_type": 1 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.600 "dma_device_type": 2 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "dma_device_id": "system", 00:09:30.600 "dma_device_type": 1 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.600 "dma_device_type": 2 00:09:30.600 } 00:09:30.600 ], 00:09:30.600 "driver_specific": { 00:09:30.600 "raid": { 00:09:30.600 "uuid": "d07ab966-fd67-4728-a098-840bcf19d039", 00:09:30.600 "strip_size_kb": 64, 00:09:30.600 "state": "online", 00:09:30.600 "raid_level": "concat", 00:09:30.600 "superblock": true, 00:09:30.600 "num_base_bdevs": 3, 00:09:30.600 "num_base_bdevs_discovered": 3, 00:09:30.600 "num_base_bdevs_operational": 3, 00:09:30.600 "base_bdevs_list": [ 00:09:30.600 { 00:09:30.600 
"name": "BaseBdev1", 00:09:30.600 "uuid": "f788abc5-49c4-4294-bbcc-cb81f1b80ab3", 00:09:30.600 "is_configured": true, 00:09:30.600 "data_offset": 2048, 00:09:30.600 "data_size": 63488 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "name": "BaseBdev2", 00:09:30.600 "uuid": "d763239d-89f1-4559-a077-ca331458c9db", 00:09:30.600 "is_configured": true, 00:09:30.600 "data_offset": 2048, 00:09:30.600 "data_size": 63488 00:09:30.600 }, 00:09:30.600 { 00:09:30.600 "name": "BaseBdev3", 00:09:30.600 "uuid": "3a0b1754-cde1-478f-bfc6-5698282f3b65", 00:09:30.600 "is_configured": true, 00:09:30.600 "data_offset": 2048, 00:09:30.600 "data_size": 63488 00:09:30.600 } 00:09:30.600 ] 00:09:30.600 } 00:09:30.600 } 00:09:30.600 }' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:30.600 BaseBdev2 00:09:30.600 BaseBdev3' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 03:58:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.600 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 [2024-11-18 03:58:27.226113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.600 [2024-11-18 03:58:27.226157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.600 [2024-11-18 03:58:27.226215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.861 "name": "Existed_Raid", 00:09:30.861 "uuid": "d07ab966-fd67-4728-a098-840bcf19d039", 00:09:30.861 "strip_size_kb": 64, 00:09:30.861 "state": "offline", 00:09:30.861 "raid_level": "concat", 00:09:30.861 "superblock": true, 00:09:30.861 "num_base_bdevs": 3, 00:09:30.861 "num_base_bdevs_discovered": 2, 00:09:30.861 "num_base_bdevs_operational": 2, 00:09:30.861 "base_bdevs_list": [ 00:09:30.861 { 00:09:30.861 "name": null, 00:09:30.861 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:30.861 "is_configured": false, 00:09:30.861 "data_offset": 0, 00:09:30.861 "data_size": 63488 00:09:30.861 }, 00:09:30.861 { 00:09:30.861 "name": "BaseBdev2", 00:09:30.861 "uuid": "d763239d-89f1-4559-a077-ca331458c9db", 00:09:30.861 "is_configured": true, 00:09:30.861 "data_offset": 2048, 00:09:30.861 "data_size": 63488 00:09:30.861 }, 00:09:30.861 { 00:09:30.861 "name": "BaseBdev3", 00:09:30.861 "uuid": "3a0b1754-cde1-478f-bfc6-5698282f3b65", 00:09:30.861 "is_configured": true, 00:09:30.861 "data_offset": 2048, 00:09:30.861 "data_size": 63488 00:09:30.861 } 00:09:30.861 ] 00:09:30.861 }' 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.861 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.121 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:31.121 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.121 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.121 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.380 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.381 [2024-11-18 03:58:27.810776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.381 03:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.381 [2024-11-18 03:58:27.975115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:31.381 [2024-11-18 03:58:27.975271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:31.641 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.642 BaseBdev2 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 
03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.642 [ 00:09:31.642 { 00:09:31.642 "name": "BaseBdev2", 00:09:31.642 "aliases": [ 00:09:31.642 "7f15468b-c466-47da-815e-f68f4a0faf29" 00:09:31.642 ], 00:09:31.642 "product_name": "Malloc disk", 00:09:31.642 "block_size": 512, 00:09:31.642 "num_blocks": 65536, 00:09:31.642 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:31.642 "assigned_rate_limits": { 00:09:31.642 "rw_ios_per_sec": 0, 00:09:31.642 "rw_mbytes_per_sec": 0, 00:09:31.642 "r_mbytes_per_sec": 0, 00:09:31.642 "w_mbytes_per_sec": 0 
00:09:31.642 }, 00:09:31.642 "claimed": false, 00:09:31.642 "zoned": false, 00:09:31.642 "supported_io_types": { 00:09:31.642 "read": true, 00:09:31.642 "write": true, 00:09:31.642 "unmap": true, 00:09:31.642 "flush": true, 00:09:31.642 "reset": true, 00:09:31.642 "nvme_admin": false, 00:09:31.642 "nvme_io": false, 00:09:31.642 "nvme_io_md": false, 00:09:31.642 "write_zeroes": true, 00:09:31.642 "zcopy": true, 00:09:31.642 "get_zone_info": false, 00:09:31.642 "zone_management": false, 00:09:31.642 "zone_append": false, 00:09:31.642 "compare": false, 00:09:31.642 "compare_and_write": false, 00:09:31.642 "abort": true, 00:09:31.642 "seek_hole": false, 00:09:31.642 "seek_data": false, 00:09:31.642 "copy": true, 00:09:31.642 "nvme_iov_md": false 00:09:31.642 }, 00:09:31.642 "memory_domains": [ 00:09:31.642 { 00:09:31.642 "dma_device_id": "system", 00:09:31.642 "dma_device_type": 1 00:09:31.642 }, 00:09:31.642 { 00:09:31.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.642 "dma_device_type": 2 00:09:31.642 } 00:09:31.642 ], 00:09:31.642 "driver_specific": {} 00:09:31.642 } 00:09:31.642 ] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.642 BaseBdev3 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.642 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.903 [ 00:09:31.903 { 00:09:31.903 "name": "BaseBdev3", 00:09:31.903 "aliases": [ 00:09:31.903 "3ef9962e-5946-466d-8291-89f7e7f352f3" 00:09:31.903 ], 00:09:31.903 "product_name": "Malloc disk", 00:09:31.903 "block_size": 512, 00:09:31.903 "num_blocks": 65536, 00:09:31.903 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:31.903 "assigned_rate_limits": { 00:09:31.903 "rw_ios_per_sec": 0, 00:09:31.903 "rw_mbytes_per_sec": 0, 
00:09:31.903 "r_mbytes_per_sec": 0, 00:09:31.903 "w_mbytes_per_sec": 0 00:09:31.903 }, 00:09:31.903 "claimed": false, 00:09:31.903 "zoned": false, 00:09:31.903 "supported_io_types": { 00:09:31.903 "read": true, 00:09:31.903 "write": true, 00:09:31.903 "unmap": true, 00:09:31.903 "flush": true, 00:09:31.903 "reset": true, 00:09:31.903 "nvme_admin": false, 00:09:31.903 "nvme_io": false, 00:09:31.903 "nvme_io_md": false, 00:09:31.903 "write_zeroes": true, 00:09:31.903 "zcopy": true, 00:09:31.903 "get_zone_info": false, 00:09:31.903 "zone_management": false, 00:09:31.903 "zone_append": false, 00:09:31.903 "compare": false, 00:09:31.903 "compare_and_write": false, 00:09:31.903 "abort": true, 00:09:31.903 "seek_hole": false, 00:09:31.903 "seek_data": false, 00:09:31.903 "copy": true, 00:09:31.903 "nvme_iov_md": false 00:09:31.903 }, 00:09:31.903 "memory_domains": [ 00:09:31.903 { 00:09:31.903 "dma_device_id": "system", 00:09:31.903 "dma_device_type": 1 00:09:31.903 }, 00:09:31.903 { 00:09:31.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.903 "dma_device_type": 2 00:09:31.903 } 00:09:31.903 ], 00:09:31.903 "driver_specific": {} 00:09:31.903 } 00:09:31.903 ] 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.903 [2024-11-18 03:58:28.306642] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.903 [2024-11-18 03:58:28.306772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.903 [2024-11-18 03:58:28.306818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.903 [2024-11-18 03:58:28.309039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.903 03:58:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.903 "name": "Existed_Raid", 00:09:31.903 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:31.903 "strip_size_kb": 64, 00:09:31.903 "state": "configuring", 00:09:31.903 "raid_level": "concat", 00:09:31.903 "superblock": true, 00:09:31.903 "num_base_bdevs": 3, 00:09:31.903 "num_base_bdevs_discovered": 2, 00:09:31.903 "num_base_bdevs_operational": 3, 00:09:31.903 "base_bdevs_list": [ 00:09:31.903 { 00:09:31.903 "name": "BaseBdev1", 00:09:31.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.903 "is_configured": false, 00:09:31.903 "data_offset": 0, 00:09:31.903 "data_size": 0 00:09:31.903 }, 00:09:31.903 { 00:09:31.903 "name": "BaseBdev2", 00:09:31.903 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:31.903 "is_configured": true, 00:09:31.903 "data_offset": 2048, 00:09:31.903 "data_size": 63488 00:09:31.903 }, 00:09:31.903 { 00:09:31.903 "name": "BaseBdev3", 00:09:31.903 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:31.903 "is_configured": true, 00:09:31.903 "data_offset": 2048, 00:09:31.903 "data_size": 63488 00:09:31.903 } 00:09:31.903 ] 00:09:31.903 }' 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.903 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.163 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:32.163 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.163 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.164 [2024-11-18 03:58:28.753913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.164 03:58:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.164 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.424 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.424 "name": "Existed_Raid", 00:09:32.424 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:32.424 "strip_size_kb": 64, 00:09:32.424 "state": "configuring", 00:09:32.424 "raid_level": "concat", 00:09:32.424 "superblock": true, 00:09:32.424 "num_base_bdevs": 3, 00:09:32.424 "num_base_bdevs_discovered": 1, 00:09:32.424 "num_base_bdevs_operational": 3, 00:09:32.424 "base_bdevs_list": [ 00:09:32.424 { 00:09:32.424 "name": "BaseBdev1", 00:09:32.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.424 "is_configured": false, 00:09:32.424 "data_offset": 0, 00:09:32.424 "data_size": 0 00:09:32.424 }, 00:09:32.424 { 00:09:32.424 "name": null, 00:09:32.424 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:32.424 "is_configured": false, 00:09:32.424 "data_offset": 0, 00:09:32.424 "data_size": 63488 00:09:32.424 }, 00:09:32.424 { 00:09:32.424 "name": "BaseBdev3", 00:09:32.424 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:32.424 "is_configured": true, 00:09:32.424 "data_offset": 2048, 00:09:32.424 "data_size": 63488 00:09:32.424 } 00:09:32.424 ] 00:09:32.424 }' 00:09:32.424 03:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.424 03:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.687 [2024-11-18 03:58:29.225295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.687 BaseBdev1 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:32.687 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.688 [ 00:09:32.688 { 00:09:32.688 "name": "BaseBdev1", 00:09:32.688 "aliases": [ 00:09:32.688 "db53292b-7cd1-416d-a475-4734a223c180" 00:09:32.688 ], 00:09:32.688 "product_name": "Malloc disk", 00:09:32.688 "block_size": 512, 00:09:32.688 "num_blocks": 65536, 00:09:32.688 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:32.688 "assigned_rate_limits": { 00:09:32.688 "rw_ios_per_sec": 0, 00:09:32.688 "rw_mbytes_per_sec": 0, 00:09:32.688 "r_mbytes_per_sec": 0, 00:09:32.688 "w_mbytes_per_sec": 0 00:09:32.688 }, 00:09:32.688 "claimed": true, 00:09:32.688 "claim_type": "exclusive_write", 00:09:32.688 "zoned": false, 00:09:32.688 "supported_io_types": { 00:09:32.688 "read": true, 00:09:32.688 "write": true, 00:09:32.688 "unmap": true, 00:09:32.688 "flush": true, 00:09:32.688 "reset": true, 00:09:32.688 "nvme_admin": false, 00:09:32.688 "nvme_io": false, 00:09:32.688 "nvme_io_md": false, 00:09:32.688 "write_zeroes": true, 00:09:32.688 "zcopy": true, 00:09:32.688 "get_zone_info": false, 00:09:32.688 "zone_management": false, 00:09:32.688 "zone_append": false, 00:09:32.688 "compare": false, 00:09:32.688 "compare_and_write": false, 00:09:32.688 "abort": true, 00:09:32.688 "seek_hole": false, 00:09:32.688 "seek_data": false, 00:09:32.688 "copy": true, 00:09:32.688 "nvme_iov_md": false 00:09:32.688 }, 00:09:32.688 "memory_domains": [ 00:09:32.688 { 00:09:32.688 "dma_device_id": "system", 00:09:32.688 "dma_device_type": 1 00:09:32.688 }, 00:09:32.688 { 00:09:32.688 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:32.688 "dma_device_type": 2 00:09:32.688 } 00:09:32.688 ], 00:09:32.688 "driver_specific": {} 00:09:32.688 } 00:09:32.688 ] 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.688 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.689 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.689 "name": "Existed_Raid", 00:09:32.689 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:32.689 "strip_size_kb": 64, 00:09:32.689 "state": "configuring", 00:09:32.689 "raid_level": "concat", 00:09:32.689 "superblock": true, 00:09:32.689 "num_base_bdevs": 3, 00:09:32.689 "num_base_bdevs_discovered": 2, 00:09:32.689 "num_base_bdevs_operational": 3, 00:09:32.689 "base_bdevs_list": [ 00:09:32.689 { 00:09:32.689 "name": "BaseBdev1", 00:09:32.689 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:32.689 "is_configured": true, 00:09:32.689 "data_offset": 2048, 00:09:32.689 "data_size": 63488 00:09:32.689 }, 00:09:32.689 { 00:09:32.689 "name": null, 00:09:32.689 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:32.689 "is_configured": false, 00:09:32.689 "data_offset": 0, 00:09:32.689 "data_size": 63488 00:09:32.689 }, 00:09:32.689 { 00:09:32.689 "name": "BaseBdev3", 00:09:32.689 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:32.689 "is_configured": true, 00:09:32.689 "data_offset": 2048, 00:09:32.690 "data_size": 63488 00:09:32.690 } 00:09:32.690 ] 00:09:32.690 }' 00:09:32.690 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.690 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.261 [2024-11-18 03:58:29.716536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.261 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.261 "name": "Existed_Raid", 00:09:33.262 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:33.262 "strip_size_kb": 64, 00:09:33.262 "state": "configuring", 00:09:33.262 "raid_level": "concat", 00:09:33.262 "superblock": true, 00:09:33.262 "num_base_bdevs": 3, 00:09:33.262 "num_base_bdevs_discovered": 1, 00:09:33.262 "num_base_bdevs_operational": 3, 00:09:33.262 "base_bdevs_list": [ 00:09:33.262 { 00:09:33.262 "name": "BaseBdev1", 00:09:33.262 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:33.262 "is_configured": true, 00:09:33.262 "data_offset": 2048, 00:09:33.262 "data_size": 63488 00:09:33.262 }, 00:09:33.262 { 00:09:33.262 "name": null, 00:09:33.262 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:33.262 "is_configured": false, 00:09:33.262 "data_offset": 0, 00:09:33.262 "data_size": 63488 00:09:33.262 }, 00:09:33.262 { 00:09:33.262 "name": null, 00:09:33.262 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:33.262 "is_configured": false, 00:09:33.262 "data_offset": 0, 00:09:33.262 "data_size": 63488 00:09:33.262 } 00:09:33.262 ] 00:09:33.262 }' 00:09:33.262 03:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.262 03:58:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.522 [2024-11-18 03:58:30.151952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.522 03:58:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.522 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.782 "name": "Existed_Raid", 00:09:33.782 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:33.782 "strip_size_kb": 64, 00:09:33.782 "state": "configuring", 00:09:33.782 "raid_level": "concat", 00:09:33.782 "superblock": true, 00:09:33.782 "num_base_bdevs": 3, 00:09:33.782 "num_base_bdevs_discovered": 2, 00:09:33.782 "num_base_bdevs_operational": 3, 00:09:33.782 "base_bdevs_list": [ 00:09:33.782 { 00:09:33.782 "name": "BaseBdev1", 00:09:33.782 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:33.782 "is_configured": true, 00:09:33.782 "data_offset": 2048, 00:09:33.782 "data_size": 63488 00:09:33.782 }, 00:09:33.782 { 00:09:33.782 "name": null, 00:09:33.782 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:33.782 "is_configured": 
false, 00:09:33.782 "data_offset": 0, 00:09:33.782 "data_size": 63488 00:09:33.782 }, 00:09:33.782 { 00:09:33.782 "name": "BaseBdev3", 00:09:33.782 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:33.782 "is_configured": true, 00:09:33.782 "data_offset": 2048, 00:09:33.782 "data_size": 63488 00:09:33.782 } 00:09:33.782 ] 00:09:33.782 }' 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.782 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.042 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.042 [2024-11-18 03:58:30.667577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.302 03:58:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.302 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.302 "name": "Existed_Raid", 00:09:34.302 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:34.302 "strip_size_kb": 64, 00:09:34.302 "state": "configuring", 00:09:34.303 "raid_level": "concat", 00:09:34.303 "superblock": true, 00:09:34.303 "num_base_bdevs": 3, 00:09:34.303 
"num_base_bdevs_discovered": 1, 00:09:34.303 "num_base_bdevs_operational": 3, 00:09:34.303 "base_bdevs_list": [ 00:09:34.303 { 00:09:34.303 "name": null, 00:09:34.303 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:34.303 "is_configured": false, 00:09:34.303 "data_offset": 0, 00:09:34.303 "data_size": 63488 00:09:34.303 }, 00:09:34.303 { 00:09:34.303 "name": null, 00:09:34.303 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:34.303 "is_configured": false, 00:09:34.303 "data_offset": 0, 00:09:34.303 "data_size": 63488 00:09:34.303 }, 00:09:34.303 { 00:09:34.303 "name": "BaseBdev3", 00:09:34.303 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:34.303 "is_configured": true, 00:09:34.303 "data_offset": 2048, 00:09:34.303 "data_size": 63488 00:09:34.303 } 00:09:34.303 ] 00:09:34.303 }' 00:09:34.303 03:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.303 03:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.891 03:58:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.891 [2024-11-18 03:58:31.260980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.891 
03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.891 "name": "Existed_Raid", 00:09:34.891 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:34.891 "strip_size_kb": 64, 00:09:34.891 "state": "configuring", 00:09:34.891 "raid_level": "concat", 00:09:34.891 "superblock": true, 00:09:34.891 "num_base_bdevs": 3, 00:09:34.891 "num_base_bdevs_discovered": 2, 00:09:34.891 "num_base_bdevs_operational": 3, 00:09:34.891 "base_bdevs_list": [ 00:09:34.891 { 00:09:34.891 "name": null, 00:09:34.891 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:34.891 "is_configured": false, 00:09:34.891 "data_offset": 0, 00:09:34.891 "data_size": 63488 00:09:34.891 }, 00:09:34.891 { 00:09:34.891 "name": "BaseBdev2", 00:09:34.891 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:34.891 "is_configured": true, 00:09:34.891 "data_offset": 2048, 00:09:34.891 "data_size": 63488 00:09:34.891 }, 00:09:34.891 { 00:09:34.891 "name": "BaseBdev3", 00:09:34.891 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:34.891 "is_configured": true, 00:09:34.891 "data_offset": 2048, 00:09:34.891 "data_size": 63488 00:09:34.891 } 00:09:34.891 ] 00:09:34.891 }' 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.891 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u db53292b-7cd1-416d-a475-4734a223c180 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.151 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.412 [2024-11-18 03:58:31.799066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:35.412 [2024-11-18 03:58:31.799384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:35.412 NewBaseBdev 00:09:35.412 [2024-11-18 03:58:31.799468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.412 [2024-11-18 03:58:31.799769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:35.412 [2024-11-18 03:58:31.799949] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:35.412 [2024-11-18 03:58:31.799961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:09:35.412 [2024-11-18 03:58:31.800124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.412 [ 00:09:35.412 { 00:09:35.412 "name": "NewBaseBdev", 00:09:35.412 "aliases": [ 00:09:35.412 "db53292b-7cd1-416d-a475-4734a223c180" 00:09:35.412 ], 00:09:35.412 "product_name": "Malloc disk", 00:09:35.412 "block_size": 512, 
00:09:35.412 "num_blocks": 65536, 00:09:35.412 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:35.412 "assigned_rate_limits": { 00:09:35.412 "rw_ios_per_sec": 0, 00:09:35.412 "rw_mbytes_per_sec": 0, 00:09:35.412 "r_mbytes_per_sec": 0, 00:09:35.412 "w_mbytes_per_sec": 0 00:09:35.412 }, 00:09:35.412 "claimed": true, 00:09:35.412 "claim_type": "exclusive_write", 00:09:35.412 "zoned": false, 00:09:35.412 "supported_io_types": { 00:09:35.412 "read": true, 00:09:35.412 "write": true, 00:09:35.412 "unmap": true, 00:09:35.412 "flush": true, 00:09:35.412 "reset": true, 00:09:35.412 "nvme_admin": false, 00:09:35.412 "nvme_io": false, 00:09:35.412 "nvme_io_md": false, 00:09:35.412 "write_zeroes": true, 00:09:35.412 "zcopy": true, 00:09:35.412 "get_zone_info": false, 00:09:35.412 "zone_management": false, 00:09:35.412 "zone_append": false, 00:09:35.412 "compare": false, 00:09:35.412 "compare_and_write": false, 00:09:35.412 "abort": true, 00:09:35.412 "seek_hole": false, 00:09:35.412 "seek_data": false, 00:09:35.412 "copy": true, 00:09:35.412 "nvme_iov_md": false 00:09:35.412 }, 00:09:35.412 "memory_domains": [ 00:09:35.412 { 00:09:35.412 "dma_device_id": "system", 00:09:35.412 "dma_device_type": 1 00:09:35.412 }, 00:09:35.412 { 00:09:35.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.412 "dma_device_type": 2 00:09:35.412 } 00:09:35.412 ], 00:09:35.412 "driver_specific": {} 00:09:35.412 } 00:09:35.412 ] 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.412 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.412 "name": "Existed_Raid", 00:09:35.412 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:35.412 "strip_size_kb": 64, 00:09:35.412 "state": "online", 00:09:35.412 "raid_level": "concat", 00:09:35.412 "superblock": true, 00:09:35.412 "num_base_bdevs": 3, 00:09:35.412 "num_base_bdevs_discovered": 3, 00:09:35.412 "num_base_bdevs_operational": 3, 00:09:35.412 "base_bdevs_list": [ 00:09:35.412 { 00:09:35.412 "name": "NewBaseBdev", 00:09:35.412 "uuid": 
"db53292b-7cd1-416d-a475-4734a223c180", 00:09:35.412 "is_configured": true, 00:09:35.412 "data_offset": 2048, 00:09:35.412 "data_size": 63488 00:09:35.412 }, 00:09:35.413 { 00:09:35.413 "name": "BaseBdev2", 00:09:35.413 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:35.413 "is_configured": true, 00:09:35.413 "data_offset": 2048, 00:09:35.413 "data_size": 63488 00:09:35.413 }, 00:09:35.413 { 00:09:35.413 "name": "BaseBdev3", 00:09:35.413 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:35.413 "is_configured": true, 00:09:35.413 "data_offset": 2048, 00:09:35.413 "data_size": 63488 00:09:35.413 } 00:09:35.413 ] 00:09:35.413 }' 00:09:35.413 03:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.413 03:58:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:09:35.673 [2024-11-18 03:58:32.226736] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.673 "name": "Existed_Raid", 00:09:35.673 "aliases": [ 00:09:35.673 "70c24367-7940-448f-bc4f-d274ac5ba9b7" 00:09:35.673 ], 00:09:35.673 "product_name": "Raid Volume", 00:09:35.673 "block_size": 512, 00:09:35.673 "num_blocks": 190464, 00:09:35.673 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:35.673 "assigned_rate_limits": { 00:09:35.673 "rw_ios_per_sec": 0, 00:09:35.673 "rw_mbytes_per_sec": 0, 00:09:35.673 "r_mbytes_per_sec": 0, 00:09:35.673 "w_mbytes_per_sec": 0 00:09:35.673 }, 00:09:35.673 "claimed": false, 00:09:35.673 "zoned": false, 00:09:35.673 "supported_io_types": { 00:09:35.673 "read": true, 00:09:35.673 "write": true, 00:09:35.673 "unmap": true, 00:09:35.673 "flush": true, 00:09:35.673 "reset": true, 00:09:35.673 "nvme_admin": false, 00:09:35.673 "nvme_io": false, 00:09:35.673 "nvme_io_md": false, 00:09:35.673 "write_zeroes": true, 00:09:35.673 "zcopy": false, 00:09:35.673 "get_zone_info": false, 00:09:35.673 "zone_management": false, 00:09:35.673 "zone_append": false, 00:09:35.673 "compare": false, 00:09:35.673 "compare_and_write": false, 00:09:35.673 "abort": false, 00:09:35.673 "seek_hole": false, 00:09:35.673 "seek_data": false, 00:09:35.673 "copy": false, 00:09:35.673 "nvme_iov_md": false 00:09:35.673 }, 00:09:35.673 "memory_domains": [ 00:09:35.673 { 00:09:35.673 "dma_device_id": "system", 00:09:35.673 "dma_device_type": 1 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.673 "dma_device_type": 2 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 "dma_device_id": "system", 00:09:35.673 "dma_device_type": 1 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.673 "dma_device_type": 2 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 "dma_device_id": "system", 00:09:35.673 "dma_device_type": 1 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.673 "dma_device_type": 2 00:09:35.673 } 00:09:35.673 ], 00:09:35.673 "driver_specific": { 00:09:35.673 "raid": { 00:09:35.673 "uuid": "70c24367-7940-448f-bc4f-d274ac5ba9b7", 00:09:35.673 "strip_size_kb": 64, 00:09:35.673 "state": "online", 00:09:35.673 "raid_level": "concat", 00:09:35.673 "superblock": true, 00:09:35.673 "num_base_bdevs": 3, 00:09:35.673 "num_base_bdevs_discovered": 3, 00:09:35.673 "num_base_bdevs_operational": 3, 00:09:35.673 "base_bdevs_list": [ 00:09:35.673 { 00:09:35.673 "name": "NewBaseBdev", 00:09:35.673 "uuid": "db53292b-7cd1-416d-a475-4734a223c180", 00:09:35.673 "is_configured": true, 00:09:35.673 "data_offset": 2048, 00:09:35.673 "data_size": 63488 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 "name": "BaseBdev2", 00:09:35.673 "uuid": "7f15468b-c466-47da-815e-f68f4a0faf29", 00:09:35.673 "is_configured": true, 00:09:35.673 "data_offset": 2048, 00:09:35.673 "data_size": 63488 00:09:35.673 }, 00:09:35.673 { 00:09:35.673 "name": "BaseBdev3", 00:09:35.673 "uuid": "3ef9962e-5946-466d-8291-89f7e7f352f3", 00:09:35.673 "is_configured": true, 00:09:35.673 "data_offset": 2048, 00:09:35.673 "data_size": 63488 00:09:35.673 } 00:09:35.673 ] 00:09:35.673 } 00:09:35.673 } 00:09:35.673 }' 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:35.673 BaseBdev2 00:09:35.673 BaseBdev3' 00:09:35.673 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.933 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.934 [2024-11-18 03:58:32.505954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.934 [2024-11-18 03:58:32.506074] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.934 [2024-11-18 03:58:32.506177] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.934 [2024-11-18 03:58:32.506242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.934 [2024-11-18 03:58:32.506256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66208 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66208 ']' 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66208 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66208 00:09:35.934 killing process with pid 66208 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66208' 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66208 00:09:35.934 [2024-11-18 03:58:32.554314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.934 03:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66208 00:09:36.503 [2024-11-18 03:58:32.889985] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.885 03:58:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.885 00:09:37.885 real 0m10.446s 00:09:37.885 user 0m16.385s 00:09:37.885 sys 0m1.727s 00:09:37.885 ************************************ 00:09:37.885 END TEST raid_state_function_test_sb 
00:09:37.885 ************************************ 00:09:37.885 03:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.885 03:58:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.885 03:58:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:37.885 03:58:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.885 03:58:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.885 03:58:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.885 ************************************ 00:09:37.885 START TEST raid_superblock_test 00:09:37.885 ************************************ 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:37.885 03:58:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66827 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66827 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66827 ']' 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.885 03:58:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.885 [2024-11-18 03:58:34.273979] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:37.885 [2024-11-18 03:58:34.274197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66827 ] 00:09:37.885 [2024-11-18 03:58:34.453938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.145 [2024-11-18 03:58:34.595428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.405 [2024-11-18 03:58:34.839509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.405 [2024-11-18 03:58:34.839670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:38.665 
03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.665 malloc1 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.665 [2024-11-18 03:58:35.146988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.665 [2024-11-18 03:58:35.147071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.665 [2024-11-18 03:58:35.147110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:38.665 [2024-11-18 03:58:35.147120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.665 [2024-11-18 03:58:35.149635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.665 [2024-11-18 03:58:35.149776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.665 pt1 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.665 malloc2 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.665 [2024-11-18 03:58:35.210007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.665 [2024-11-18 03:58:35.210160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.665 [2024-11-18 03:58:35.210202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:38.665 [2024-11-18 03:58:35.210231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.665 [2024-11-18 03:58:35.212748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.665 [2024-11-18 03:58:35.212850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.665 
pt2 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.665 malloc3 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:38.665 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.666 [2024-11-18 03:58:35.291344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:38.666 [2024-11-18 03:58:35.291501] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.666 [2024-11-18 03:58:35.291543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:38.666 [2024-11-18 03:58:35.291576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.666 [2024-11-18 03:58:35.293973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.666 [2024-11-18 03:58:35.294051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:38.666 pt3 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.666 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.666 [2024-11-18 03:58:35.303377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.926 [2024-11-18 03:58:35.305487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.926 [2024-11-18 03:58:35.305553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:38.926 [2024-11-18 03:58:35.305722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:38.926 [2024-11-18 03:58:35.305738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:38.926 [2024-11-18 03:58:35.306030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:38.926 [2024-11-18 03:58:35.306205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:38.926 [2024-11-18 03:58:35.306222] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:38.926 [2024-11-18 03:58:35.306364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.926 03:58:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.926 "name": "raid_bdev1", 00:09:38.926 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:38.926 "strip_size_kb": 64, 00:09:38.926 "state": "online", 00:09:38.926 "raid_level": "concat", 00:09:38.926 "superblock": true, 00:09:38.926 "num_base_bdevs": 3, 00:09:38.926 "num_base_bdevs_discovered": 3, 00:09:38.926 "num_base_bdevs_operational": 3, 00:09:38.926 "base_bdevs_list": [ 00:09:38.926 { 00:09:38.926 "name": "pt1", 00:09:38.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.926 "is_configured": true, 00:09:38.926 "data_offset": 2048, 00:09:38.926 "data_size": 63488 00:09:38.926 }, 00:09:38.926 { 00:09:38.926 "name": "pt2", 00:09:38.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.926 "is_configured": true, 00:09:38.926 "data_offset": 2048, 00:09:38.926 "data_size": 63488 00:09:38.926 }, 00:09:38.926 { 00:09:38.926 "name": "pt3", 00:09:38.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.926 "is_configured": true, 00:09:38.926 "data_offset": 2048, 00:09:38.926 "data_size": 63488 00:09:38.926 } 00:09:38.926 ] 00:09:38.926 }' 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.926 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.186 [2024-11-18 03:58:35.699115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.186 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.186 "name": "raid_bdev1", 00:09:39.186 "aliases": [ 00:09:39.186 "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad" 00:09:39.186 ], 00:09:39.186 "product_name": "Raid Volume", 00:09:39.186 "block_size": 512, 00:09:39.186 "num_blocks": 190464, 00:09:39.186 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:39.186 "assigned_rate_limits": { 00:09:39.186 "rw_ios_per_sec": 0, 00:09:39.186 "rw_mbytes_per_sec": 0, 00:09:39.186 "r_mbytes_per_sec": 0, 00:09:39.186 "w_mbytes_per_sec": 0 00:09:39.186 }, 00:09:39.186 "claimed": false, 00:09:39.186 "zoned": false, 00:09:39.186 "supported_io_types": { 00:09:39.186 "read": true, 00:09:39.186 "write": true, 00:09:39.186 "unmap": true, 00:09:39.186 "flush": true, 00:09:39.186 "reset": true, 00:09:39.186 "nvme_admin": false, 00:09:39.186 "nvme_io": false, 00:09:39.186 "nvme_io_md": false, 00:09:39.186 "write_zeroes": true, 00:09:39.186 "zcopy": false, 00:09:39.186 "get_zone_info": false, 00:09:39.186 "zone_management": false, 00:09:39.186 "zone_append": false, 00:09:39.186 "compare": 
false, 00:09:39.186 "compare_and_write": false, 00:09:39.186 "abort": false, 00:09:39.186 "seek_hole": false, 00:09:39.186 "seek_data": false, 00:09:39.186 "copy": false, 00:09:39.186 "nvme_iov_md": false 00:09:39.186 }, 00:09:39.186 "memory_domains": [ 00:09:39.186 { 00:09:39.186 "dma_device_id": "system", 00:09:39.187 "dma_device_type": 1 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.187 "dma_device_type": 2 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "dma_device_id": "system", 00:09:39.187 "dma_device_type": 1 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.187 "dma_device_type": 2 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "dma_device_id": "system", 00:09:39.187 "dma_device_type": 1 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.187 "dma_device_type": 2 00:09:39.187 } 00:09:39.187 ], 00:09:39.187 "driver_specific": { 00:09:39.187 "raid": { 00:09:39.187 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:39.187 "strip_size_kb": 64, 00:09:39.187 "state": "online", 00:09:39.187 "raid_level": "concat", 00:09:39.187 "superblock": true, 00:09:39.187 "num_base_bdevs": 3, 00:09:39.187 "num_base_bdevs_discovered": 3, 00:09:39.187 "num_base_bdevs_operational": 3, 00:09:39.187 "base_bdevs_list": [ 00:09:39.187 { 00:09:39.187 "name": "pt1", 00:09:39.187 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.187 "is_configured": true, 00:09:39.187 "data_offset": 2048, 00:09:39.187 "data_size": 63488 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "name": "pt2", 00:09:39.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.187 "is_configured": true, 00:09:39.187 "data_offset": 2048, 00:09:39.187 "data_size": 63488 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "name": "pt3", 00:09:39.187 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.187 "is_configured": true, 00:09:39.187 "data_offset": 2048, 00:09:39.187 
"data_size": 63488 00:09:39.187 } 00:09:39.187 ] 00:09:39.187 } 00:09:39.187 } 00:09:39.187 }' 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:39.187 pt2 00:09:39.187 pt3' 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.187 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 [2024-11-18 03:58:35.970483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad ']' 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.447 03:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 [2024-11-18 03:58:36.002148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.447 [2024-11-18 03:58:36.002179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.447 [2024-11-18 03:58:36.002262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.447 [2024-11-18 03:58:36.002332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.447 [2024-11-18 03:58:36.002342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:39.447 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.447 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.447 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:39.447 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.447 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.448 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.708 [2024-11-18 03:58:36.157979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:39.708 [2024-11-18 03:58:36.160238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:39.708 
[2024-11-18 03:58:36.160346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:39.708 [2024-11-18 03:58:36.160421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:39.708 [2024-11-18 03:58:36.160530] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:39.708 [2024-11-18 03:58:36.160550] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:39.708 [2024-11-18 03:58:36.160567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.708 [2024-11-18 03:58:36.160578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:39.708 request: 00:09:39.708 { 00:09:39.708 "name": "raid_bdev1", 00:09:39.708 "raid_level": "concat", 00:09:39.708 "base_bdevs": [ 00:09:39.708 "malloc1", 00:09:39.708 "malloc2", 00:09:39.708 "malloc3" 00:09:39.708 ], 00:09:39.708 "strip_size_kb": 64, 00:09:39.708 "superblock": false, 00:09:39.708 "method": "bdev_raid_create", 00:09:39.708 "req_id": 1 00:09:39.708 } 00:09:39.708 Got JSON-RPC error response 00:09:39.708 response: 00:09:39.708 { 00:09:39.708 "code": -17, 00:09:39.708 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:39.708 } 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:39.708 03:58:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.708 [2024-11-18 03:58:36.225760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.708 [2024-11-18 03:58:36.225870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.708 [2024-11-18 03:58:36.225895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:39.708 [2024-11-18 03:58:36.225905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.708 [2024-11-18 03:58:36.228377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.708 [2024-11-18 03:58:36.228413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.708 [2024-11-18 03:58:36.228497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:39.708 [2024-11-18 03:58:36.228553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:39.708 pt1 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.708 "name": "raid_bdev1", 00:09:39.708 "uuid": 
"20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:39.708 "strip_size_kb": 64, 00:09:39.708 "state": "configuring", 00:09:39.708 "raid_level": "concat", 00:09:39.708 "superblock": true, 00:09:39.708 "num_base_bdevs": 3, 00:09:39.708 "num_base_bdevs_discovered": 1, 00:09:39.708 "num_base_bdevs_operational": 3, 00:09:39.708 "base_bdevs_list": [ 00:09:39.708 { 00:09:39.708 "name": "pt1", 00:09:39.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.708 "is_configured": true, 00:09:39.708 "data_offset": 2048, 00:09:39.708 "data_size": 63488 00:09:39.708 }, 00:09:39.708 { 00:09:39.708 "name": null, 00:09:39.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.708 "is_configured": false, 00:09:39.708 "data_offset": 2048, 00:09:39.708 "data_size": 63488 00:09:39.708 }, 00:09:39.708 { 00:09:39.708 "name": null, 00:09:39.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.708 "is_configured": false, 00:09:39.708 "data_offset": 2048, 00:09:39.708 "data_size": 63488 00:09:39.708 } 00:09:39.708 ] 00:09:39.708 }' 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.708 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.279 [2024-11-18 03:58:36.665063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.279 [2024-11-18 03:58:36.665246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.279 [2024-11-18 03:58:36.665292] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:40.279 [2024-11-18 03:58:36.665323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.279 [2024-11-18 03:58:36.665910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.279 [2024-11-18 03:58:36.665974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.279 [2024-11-18 03:58:36.666108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.279 [2024-11-18 03:58:36.666162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.279 pt2 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.279 [2024-11-18 03:58:36.677036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.279 "name": "raid_bdev1", 00:09:40.279 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:40.279 "strip_size_kb": 64, 00:09:40.279 "state": "configuring", 00:09:40.279 "raid_level": "concat", 00:09:40.279 "superblock": true, 00:09:40.279 "num_base_bdevs": 3, 00:09:40.279 "num_base_bdevs_discovered": 1, 00:09:40.279 "num_base_bdevs_operational": 3, 00:09:40.279 "base_bdevs_list": [ 00:09:40.279 { 00:09:40.279 "name": "pt1", 00:09:40.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.279 "is_configured": true, 00:09:40.279 "data_offset": 2048, 00:09:40.279 "data_size": 63488 00:09:40.279 }, 00:09:40.279 { 00:09:40.279 "name": null, 00:09:40.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.279 "is_configured": false, 00:09:40.279 "data_offset": 0, 00:09:40.279 "data_size": 63488 00:09:40.279 }, 00:09:40.279 { 00:09:40.279 "name": null, 00:09:40.279 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:40.279 "is_configured": false, 00:09:40.279 "data_offset": 2048, 00:09:40.279 "data_size": 63488 00:09:40.279 } 00:09:40.279 ] 00:09:40.279 }' 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.279 03:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.552 [2024-11-18 03:58:37.128279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.552 [2024-11-18 03:58:37.128386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.552 [2024-11-18 03:58:37.128411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:40.552 [2024-11-18 03:58:37.128425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.552 [2024-11-18 03:58:37.129036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.552 [2024-11-18 03:58:37.129061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.552 [2024-11-18 03:58:37.129162] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.552 [2024-11-18 03:58:37.129190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.552 pt2 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.552 [2024-11-18 03:58:37.136208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.552 [2024-11-18 03:58:37.136262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.552 [2024-11-18 03:58:37.136278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:40.552 [2024-11-18 03:58:37.136289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.552 [2024-11-18 03:58:37.136716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.552 [2024-11-18 03:58:37.136738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.552 [2024-11-18 03:58:37.136804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:40.552 [2024-11-18 03:58:37.136826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.552 [2024-11-18 03:58:37.136964] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.552 [2024-11-18 03:58:37.136977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.552 [2024-11-18 03:58:37.137243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:40.552 [2024-11-18 
03:58:37.137391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.552 [2024-11-18 03:58:37.137406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.552 [2024-11-18 03:58:37.137560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.552 pt3 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.552 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.829 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.829 "name": "raid_bdev1", 00:09:40.829 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:40.829 "strip_size_kb": 64, 00:09:40.829 "state": "online", 00:09:40.829 "raid_level": "concat", 00:09:40.829 "superblock": true, 00:09:40.829 "num_base_bdevs": 3, 00:09:40.829 "num_base_bdevs_discovered": 3, 00:09:40.829 "num_base_bdevs_operational": 3, 00:09:40.829 "base_bdevs_list": [ 00:09:40.829 { 00:09:40.829 "name": "pt1", 00:09:40.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.829 "is_configured": true, 00:09:40.829 "data_offset": 2048, 00:09:40.829 "data_size": 63488 00:09:40.829 }, 00:09:40.829 { 00:09:40.829 "name": "pt2", 00:09:40.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.829 "is_configured": true, 00:09:40.829 "data_offset": 2048, 00:09:40.829 "data_size": 63488 00:09:40.829 }, 00:09:40.829 { 00:09:40.829 "name": "pt3", 00:09:40.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.829 "is_configured": true, 00:09:40.829 "data_offset": 2048, 00:09:40.829 "data_size": 63488 00:09:40.829 } 00:09:40.829 ] 00:09:40.829 }' 00:09:40.829 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.829 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.089 [2024-11-18 03:58:37.619818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.089 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.089 "name": "raid_bdev1", 00:09:41.089 "aliases": [ 00:09:41.089 "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad" 00:09:41.089 ], 00:09:41.089 "product_name": "Raid Volume", 00:09:41.089 "block_size": 512, 00:09:41.089 "num_blocks": 190464, 00:09:41.089 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:41.089 "assigned_rate_limits": { 00:09:41.089 "rw_ios_per_sec": 0, 00:09:41.089 "rw_mbytes_per_sec": 0, 00:09:41.089 "r_mbytes_per_sec": 0, 00:09:41.089 "w_mbytes_per_sec": 0 00:09:41.089 }, 00:09:41.089 "claimed": false, 00:09:41.089 "zoned": false, 00:09:41.089 "supported_io_types": { 00:09:41.089 "read": true, 00:09:41.089 "write": true, 00:09:41.089 "unmap": true, 00:09:41.089 "flush": true, 00:09:41.089 "reset": true, 00:09:41.089 "nvme_admin": false, 00:09:41.089 "nvme_io": false, 00:09:41.089 "nvme_io_md": false, 
00:09:41.090 "write_zeroes": true, 00:09:41.090 "zcopy": false, 00:09:41.090 "get_zone_info": false, 00:09:41.090 "zone_management": false, 00:09:41.090 "zone_append": false, 00:09:41.090 "compare": false, 00:09:41.090 "compare_and_write": false, 00:09:41.090 "abort": false, 00:09:41.090 "seek_hole": false, 00:09:41.090 "seek_data": false, 00:09:41.090 "copy": false, 00:09:41.090 "nvme_iov_md": false 00:09:41.090 }, 00:09:41.090 "memory_domains": [ 00:09:41.090 { 00:09:41.090 "dma_device_id": "system", 00:09:41.090 "dma_device_type": 1 00:09:41.090 }, 00:09:41.090 { 00:09:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.090 "dma_device_type": 2 00:09:41.090 }, 00:09:41.090 { 00:09:41.090 "dma_device_id": "system", 00:09:41.090 "dma_device_type": 1 00:09:41.090 }, 00:09:41.090 { 00:09:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.090 "dma_device_type": 2 00:09:41.090 }, 00:09:41.090 { 00:09:41.090 "dma_device_id": "system", 00:09:41.090 "dma_device_type": 1 00:09:41.090 }, 00:09:41.090 { 00:09:41.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.090 "dma_device_type": 2 00:09:41.090 } 00:09:41.090 ], 00:09:41.090 "driver_specific": { 00:09:41.090 "raid": { 00:09:41.090 "uuid": "20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad", 00:09:41.090 "strip_size_kb": 64, 00:09:41.090 "state": "online", 00:09:41.090 "raid_level": "concat", 00:09:41.090 "superblock": true, 00:09:41.090 "num_base_bdevs": 3, 00:09:41.090 "num_base_bdevs_discovered": 3, 00:09:41.090 "num_base_bdevs_operational": 3, 00:09:41.090 "base_bdevs_list": [ 00:09:41.090 { 00:09:41.090 "name": "pt1", 00:09:41.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.090 "is_configured": true, 00:09:41.090 "data_offset": 2048, 00:09:41.090 "data_size": 63488 00:09:41.090 }, 00:09:41.090 { 00:09:41.090 "name": "pt2", 00:09:41.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.090 "is_configured": true, 00:09:41.090 "data_offset": 2048, 00:09:41.090 "data_size": 63488 00:09:41.090 }, 
00:09:41.090 { 00:09:41.090 "name": "pt3", 00:09:41.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.090 "is_configured": true, 00:09:41.090 "data_offset": 2048, 00:09:41.090 "data_size": 63488 00:09:41.090 } 00:09:41.090 ] 00:09:41.090 } 00:09:41.090 } 00:09:41.090 }' 00:09:41.090 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.090 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:41.090 pt2 00:09:41.090 pt3' 00:09:41.090 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:41.350 03:58:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.350 
[2024-11-18 03:58:37.907235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad '!=' 20a93a3e-6bcb-46c1-bca7-6fd9f4a3dfad ']' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66827 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66827 ']' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66827 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.350 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66827 00:09:41.609 killing process with pid 66827 00:09:41.609 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.609 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.609 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66827' 00:09:41.609 03:58:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66827 00:09:41.609 [2024-11-18 03:58:37.992475] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.609 03:58:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 66827 00:09:41.609 [2024-11-18 03:58:37.992603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.609 [2024-11-18 03:58:37.992675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.609 [2024-11-18 03:58:37.992689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:41.869 [2024-11-18 03:58:38.326922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.248 03:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:43.248 00:09:43.248 real 0m5.377s 00:09:43.248 user 0m7.501s 00:09:43.248 sys 0m0.998s 00:09:43.248 ************************************ 00:09:43.248 END TEST raid_superblock_test 00:09:43.248 ************************************ 00:09:43.248 03:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.248 03:58:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.248 03:58:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:43.248 03:58:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:43.248 03:58:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.248 03:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.248 ************************************ 00:09:43.248 START TEST raid_read_error_test 00:09:43.248 ************************************ 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:43.248 03:58:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4i8Y5v2ECn 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67090 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67090 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67090 ']' 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.248 03:58:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.248 [2024-11-18 03:58:39.729280] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:43.249 [2024-11-18 03:58:39.729497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67090 ] 00:09:43.508 [2024-11-18 03:58:39.905373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.508 [2024-11-18 03:58:40.046889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.768 [2024-11-18 03:58:40.284884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.768 [2024-11-18 03:58:40.285063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.028 BaseBdev1_malloc 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.028 true 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.028 [2024-11-18 03:58:40.613422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:44.028 [2024-11-18 03:58:40.613567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.028 [2024-11-18 03:58:40.613606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:44.028 [2024-11-18 03:58:40.613638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.028 [2024-11-18 03:58:40.616144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.028 [2024-11-18 03:58:40.616224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:44.028 BaseBdev1 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.028 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 BaseBdev2_malloc 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 true 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 [2024-11-18 03:58:40.691696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:44.288 [2024-11-18 03:58:40.691852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.288 [2024-11-18 03:58:40.691909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:44.288 [2024-11-18 03:58:40.691927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.288 [2024-11-18 03:58:40.694396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.288 [2024-11-18 03:58:40.694439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:44.288 BaseBdev2 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 BaseBdev3_malloc 00:09:44.288 03:58:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 true 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 [2024-11-18 03:58:40.774215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:44.288 [2024-11-18 03:58:40.774348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.288 [2024-11-18 03:58:40.774382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:44.288 [2024-11-18 03:58:40.774412] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.288 [2024-11-18 03:58:40.776849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.288 [2024-11-18 03:58:40.776924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:44.288 BaseBdev3 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 [2024-11-18 03:58:40.786275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.288 [2024-11-18 03:58:40.788431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.288 [2024-11-18 03:58:40.788557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.288 [2024-11-18 03:58:40.788787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:44.288 [2024-11-18 03:58:40.788847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.288 [2024-11-18 03:58:40.789123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:44.288 [2024-11-18 03:58:40.789321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.288 [2024-11-18 03:58:40.789367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:44.288 [2024-11-18 03:58:40.789554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.288 03:58:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.288 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.288 "name": "raid_bdev1", 00:09:44.288 "uuid": "d31cd883-7fa3-4fb3-810a-49424835f834", 00:09:44.288 "strip_size_kb": 64, 00:09:44.289 "state": "online", 00:09:44.289 "raid_level": "concat", 00:09:44.289 "superblock": true, 00:09:44.289 "num_base_bdevs": 3, 00:09:44.289 "num_base_bdevs_discovered": 3, 00:09:44.289 "num_base_bdevs_operational": 3, 00:09:44.289 "base_bdevs_list": [ 00:09:44.289 { 00:09:44.289 "name": "BaseBdev1", 00:09:44.289 "uuid": "ff3a4007-e937-53fd-b72c-49ac4e879d51", 00:09:44.289 "is_configured": true, 00:09:44.289 "data_offset": 2048, 00:09:44.289 "data_size": 63488 00:09:44.289 }, 00:09:44.289 { 00:09:44.289 "name": "BaseBdev2", 00:09:44.289 "uuid": "ee8159ef-7d69-5988-af53-e022a303a5ab", 00:09:44.289 "is_configured": true, 00:09:44.289 "data_offset": 2048, 00:09:44.289 "data_size": 63488 
00:09:44.289 }, 00:09:44.289 { 00:09:44.289 "name": "BaseBdev3", 00:09:44.289 "uuid": "9fe9604f-a0f6-5f32-ad01-b2788d5452be", 00:09:44.289 "is_configured": true, 00:09:44.289 "data_offset": 2048, 00:09:44.289 "data_size": 63488 00:09:44.289 } 00:09:44.289 ] 00:09:44.289 }' 00:09:44.289 03:58:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.289 03:58:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.859 03:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.859 03:58:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.859 [2024-11-18 03:58:41.350898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.798 "name": "raid_bdev1", 00:09:45.798 "uuid": "d31cd883-7fa3-4fb3-810a-49424835f834", 00:09:45.798 "strip_size_kb": 64, 00:09:45.798 "state": "online", 00:09:45.798 "raid_level": "concat", 00:09:45.798 "superblock": true, 00:09:45.798 "num_base_bdevs": 3, 00:09:45.798 "num_base_bdevs_discovered": 3, 00:09:45.798 "num_base_bdevs_operational": 3, 00:09:45.798 "base_bdevs_list": [ 00:09:45.798 { 00:09:45.798 "name": "BaseBdev1", 00:09:45.798 "uuid": "ff3a4007-e937-53fd-b72c-49ac4e879d51", 00:09:45.798 "is_configured": true, 00:09:45.798 "data_offset": 2048, 00:09:45.798 "data_size": 63488 
00:09:45.798 }, 00:09:45.798 { 00:09:45.798 "name": "BaseBdev2", 00:09:45.798 "uuid": "ee8159ef-7d69-5988-af53-e022a303a5ab", 00:09:45.798 "is_configured": true, 00:09:45.798 "data_offset": 2048, 00:09:45.798 "data_size": 63488 00:09:45.798 }, 00:09:45.798 { 00:09:45.798 "name": "BaseBdev3", 00:09:45.798 "uuid": "9fe9604f-a0f6-5f32-ad01-b2788d5452be", 00:09:45.798 "is_configured": true, 00:09:45.798 "data_offset": 2048, 00:09:45.798 "data_size": 63488 00:09:45.798 } 00:09:45.798 ] 00:09:45.798 }' 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.798 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.370 [2024-11-18 03:58:42.724001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.370 [2024-11-18 03:58:42.724131] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.370 [2024-11-18 03:58:42.726957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.370 [2024-11-18 03:58:42.727004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.370 [2024-11-18 03:58:42.727047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.370 [2024-11-18 03:58:42.727060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:46.370 { 00:09:46.370 "results": [ 00:09:46.370 { 00:09:46.370 "job": "raid_bdev1", 00:09:46.370 "core_mask": "0x1", 00:09:46.370 "workload": "randrw", 00:09:46.370 "percentage": 50, 
00:09:46.370 "status": "finished", 00:09:46.370 "queue_depth": 1, 00:09:46.370 "io_size": 131072, 00:09:46.370 "runtime": 1.373669, 00:09:46.370 "iops": 13752.221241070447, 00:09:46.370 "mibps": 1719.0276551338059, 00:09:46.370 "io_failed": 1, 00:09:46.370 "io_timeout": 0, 00:09:46.370 "avg_latency_us": 102.51609821675403, 00:09:46.370 "min_latency_us": 24.705676855895195, 00:09:46.370 "max_latency_us": 1502.46288209607 00:09:46.370 } 00:09:46.370 ], 00:09:46.370 "core_count": 1 00:09:46.370 } 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67090 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67090 ']' 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67090 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67090 00:09:46.370 killing process with pid 67090 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67090' 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67090 00:09:46.370 [2024-11-18 03:58:42.770015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.370 03:58:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67090 00:09:46.649 [2024-11-18 
03:58:43.028294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4i8Y5v2ECn 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:48.027 ************************************ 00:09:48.027 END TEST raid_read_error_test 00:09:48.027 ************************************ 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:48.027 00:09:48.027 real 0m4.703s 00:09:48.027 user 0m5.475s 00:09:48.027 sys 0m0.643s 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.027 03:58:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.028 03:58:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:48.028 03:58:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.028 03:58:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.028 03:58:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.028 ************************************ 00:09:48.028 START TEST raid_write_error_test 00:09:48.028 ************************************ 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:48.028 03:58:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.028 03:58:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cqmPFlSXtD 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67230 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67230 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67230 ']' 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.028 03:58:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.028 [2024-11-18 03:58:44.503064] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:48.028 [2024-11-18 03:58:44.503180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67230 ] 00:09:48.288 [2024-11-18 03:58:44.677964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.288 [2024-11-18 03:58:44.820388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.547 [2024-11-18 03:58:45.064890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.547 [2024-11-18 03:58:45.064936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.807 BaseBdev1_malloc 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.807 true 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.807 [2024-11-18 03:58:45.389817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.807 [2024-11-18 03:58:45.389975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.807 [2024-11-18 03:58:45.390013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.807 [2024-11-18 03:58:45.390044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.807 [2024-11-18 03:58:45.392464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.807 [2024-11-18 03:58:45.392544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.807 BaseBdev1 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.807 BaseBdev2_malloc 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.807 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.067 true 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.067 [2024-11-18 03:58:45.465008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.067 [2024-11-18 03:58:45.465154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.067 [2024-11-18 03:58:45.465189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.067 [2024-11-18 03:58:45.465220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.067 [2024-11-18 03:58:45.467655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.067 [2024-11-18 03:58:45.467734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.067 BaseBdev2 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.067 03:58:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.067 BaseBdev3_malloc 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.067 true 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.067 [2024-11-18 03:58:45.554089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.067 [2024-11-18 03:58:45.554221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.067 [2024-11-18 03:58:45.554256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.067 [2024-11-18 03:58:45.554286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.067 [2024-11-18 03:58:45.556685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.067 [2024-11-18 03:58:45.556766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:49.067 BaseBdev3 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.067 [2024-11-18 03:58:45.566178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.067 [2024-11-18 03:58:45.568563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.067 [2024-11-18 03:58:45.568694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.067 [2024-11-18 03:58:45.568934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:49.067 [2024-11-18 03:58:45.568947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:49.067 [2024-11-18 03:58:45.569236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:49.067 [2024-11-18 03:58:45.569410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:49.067 [2024-11-18 03:58:45.569425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:49.067 [2024-11-18 03:58:45.569625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.067 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.068 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.068 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.068 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.068 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.068 "name": "raid_bdev1", 00:09:49.068 "uuid": "a157aa3a-f6d5-464c-99ab-f1730f1256b3", 00:09:49.068 "strip_size_kb": 64, 00:09:49.068 "state": "online", 00:09:49.068 "raid_level": "concat", 00:09:49.068 "superblock": true, 00:09:49.068 "num_base_bdevs": 3, 00:09:49.068 "num_base_bdevs_discovered": 3, 00:09:49.068 "num_base_bdevs_operational": 3, 00:09:49.068 "base_bdevs_list": [ 00:09:49.068 { 00:09:49.068 
"name": "BaseBdev1", 00:09:49.068 "uuid": "d9a7db71-7e9d-58f7-8661-4b152f8dea5a", 00:09:49.068 "is_configured": true, 00:09:49.068 "data_offset": 2048, 00:09:49.068 "data_size": 63488 00:09:49.068 }, 00:09:49.068 { 00:09:49.068 "name": "BaseBdev2", 00:09:49.068 "uuid": "e3198db9-58fb-5484-84d5-28eb86654e61", 00:09:49.068 "is_configured": true, 00:09:49.068 "data_offset": 2048, 00:09:49.068 "data_size": 63488 00:09:49.068 }, 00:09:49.068 { 00:09:49.068 "name": "BaseBdev3", 00:09:49.068 "uuid": "a56f3b71-43e7-57ce-a66a-201bf913a0a6", 00:09:49.068 "is_configured": true, 00:09:49.068 "data_offset": 2048, 00:09:49.068 "data_size": 63488 00:09:49.068 } 00:09:49.068 ] 00:09:49.068 }' 00:09:49.068 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.068 03:58:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.637 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.637 03:58:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.637 [2024-11-18 03:58:46.086620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.576 03:58:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.576 03:58:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.576 03:58:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.576 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.576 "name": "raid_bdev1", 00:09:50.576 "uuid": "a157aa3a-f6d5-464c-99ab-f1730f1256b3", 00:09:50.576 "strip_size_kb": 64, 00:09:50.576 "state": "online", 
00:09:50.576 "raid_level": "concat", 00:09:50.576 "superblock": true, 00:09:50.576 "num_base_bdevs": 3, 00:09:50.576 "num_base_bdevs_discovered": 3, 00:09:50.576 "num_base_bdevs_operational": 3, 00:09:50.576 "base_bdevs_list": [ 00:09:50.576 { 00:09:50.576 "name": "BaseBdev1", 00:09:50.576 "uuid": "d9a7db71-7e9d-58f7-8661-4b152f8dea5a", 00:09:50.576 "is_configured": true, 00:09:50.576 "data_offset": 2048, 00:09:50.576 "data_size": 63488 00:09:50.576 }, 00:09:50.576 { 00:09:50.576 "name": "BaseBdev2", 00:09:50.576 "uuid": "e3198db9-58fb-5484-84d5-28eb86654e61", 00:09:50.576 "is_configured": true, 00:09:50.576 "data_offset": 2048, 00:09:50.576 "data_size": 63488 00:09:50.576 }, 00:09:50.576 { 00:09:50.576 "name": "BaseBdev3", 00:09:50.576 "uuid": "a56f3b71-43e7-57ce-a66a-201bf913a0a6", 00:09:50.576 "is_configured": true, 00:09:50.576 "data_offset": 2048, 00:09:50.576 "data_size": 63488 00:09:50.576 } 00:09:50.576 ] 00:09:50.576 }' 00:09:50.577 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.577 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.836 [2024-11-18 03:58:47.463341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.836 [2024-11-18 03:58:47.463484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.836 [2024-11-18 03:58:47.466093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.836 [2024-11-18 03:58:47.466185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.836 [2024-11-18 03:58:47.466247] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.836 [2024-11-18 03:58:47.466295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:50.836 { 00:09:50.836 "results": [ 00:09:50.836 { 00:09:50.836 "job": "raid_bdev1", 00:09:50.836 "core_mask": "0x1", 00:09:50.836 "workload": "randrw", 00:09:50.836 "percentage": 50, 00:09:50.836 "status": "finished", 00:09:50.836 "queue_depth": 1, 00:09:50.836 "io_size": 131072, 00:09:50.836 "runtime": 1.377322, 00:09:50.836 "iops": 13661.293437554907, 00:09:50.836 "mibps": 1707.6616796943633, 00:09:50.836 "io_failed": 1, 00:09:50.836 "io_timeout": 0, 00:09:50.836 "avg_latency_us": 103.2460382730194, 00:09:50.836 "min_latency_us": 25.6, 00:09:50.836 "max_latency_us": 1337.907423580786 00:09:50.836 } 00:09:50.836 ], 00:09:50.836 "core_count": 1 00:09:50.836 } 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67230 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67230 ']' 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67230 00:09:50.836 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:51.095 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.095 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67230 00:09:51.095 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.095 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.095 03:58:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67230' 00:09:51.095 killing process with pid 67230 00:09:51.095 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67230 00:09:51.096 [2024-11-18 03:58:47.510840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.096 03:58:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67230 00:09:51.355 [2024-11-18 03:58:47.772365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cqmPFlSXtD 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.738 ************************************ 00:09:52.738 END TEST raid_write_error_test 00:09:52.738 ************************************ 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:52.738 00:09:52.738 real 0m4.665s 00:09:52.738 user 0m5.398s 00:09:52.738 sys 0m0.638s 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.738 03:58:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.738 03:58:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.738 03:58:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:52.738 03:58:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.738 03:58:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.738 03:58:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.738 ************************************ 00:09:52.738 START TEST raid_state_function_test 00:09:52.738 ************************************ 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.738 Process raid pid: 67374 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67374 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67374' 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67374 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67374 ']' 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.738 03:58:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.738 [2024-11-18 03:58:49.231081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:52.738 [2024-11-18 03:58:49.231284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.001 [2024-11-18 03:58:49.406258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.001 [2024-11-18 03:58:49.547311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.260 [2024-11-18 03:58:49.781445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.260 [2024-11-18 03:58:49.781601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.521 [2024-11-18 03:58:50.058628] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.521 [2024-11-18 03:58:50.058701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.521 [2024-11-18 03:58:50.058712] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.521 [2024-11-18 03:58:50.058722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.521 [2024-11-18 03:58:50.058728] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.521 [2024-11-18 03:58:50.058737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.521 
03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.521 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.521 "name": "Existed_Raid", 00:09:53.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.522 "strip_size_kb": 0, 00:09:53.522 "state": "configuring", 00:09:53.522 "raid_level": "raid1", 00:09:53.522 "superblock": false, 00:09:53.522 "num_base_bdevs": 3, 00:09:53.522 "num_base_bdevs_discovered": 0, 00:09:53.522 "num_base_bdevs_operational": 3, 00:09:53.522 "base_bdevs_list": [ 00:09:53.522 { 00:09:53.522 "name": "BaseBdev1", 00:09:53.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.522 "is_configured": false, 00:09:53.522 "data_offset": 0, 00:09:53.522 "data_size": 0 00:09:53.522 }, 00:09:53.522 { 00:09:53.522 "name": "BaseBdev2", 00:09:53.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.522 "is_configured": false, 00:09:53.522 "data_offset": 0, 00:09:53.522 "data_size": 0 00:09:53.522 }, 00:09:53.522 { 00:09:53.522 "name": "BaseBdev3", 00:09:53.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.522 "is_configured": false, 00:09:53.522 "data_offset": 0, 00:09:53.522 "data_size": 0 00:09:53.522 } 00:09:53.522 ] 00:09:53.522 }' 00:09:53.522 03:58:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.522 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 [2024-11-18 03:58:50.513868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.117 [2024-11-18 03:58:50.514002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 [2024-11-18 03:58:50.525764] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.117 [2024-11-18 03:58:50.525865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.117 [2024-11-18 03:58:50.525879] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.117 [2024-11-18 03:58:50.525890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.117 [2024-11-18 03:58:50.525896] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.117 [2024-11-18 03:58:50.525905] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 [2024-11-18 03:58:50.580564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.117 BaseBdev1 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 [ 00:09:54.117 { 00:09:54.117 "name": "BaseBdev1", 00:09:54.117 "aliases": [ 00:09:54.117 "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9" 00:09:54.117 ], 00:09:54.117 "product_name": "Malloc disk", 00:09:54.117 "block_size": 512, 00:09:54.117 "num_blocks": 65536, 00:09:54.117 "uuid": "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9", 00:09:54.117 "assigned_rate_limits": { 00:09:54.117 "rw_ios_per_sec": 0, 00:09:54.117 "rw_mbytes_per_sec": 0, 00:09:54.117 "r_mbytes_per_sec": 0, 00:09:54.117 "w_mbytes_per_sec": 0 00:09:54.117 }, 00:09:54.117 "claimed": true, 00:09:54.117 "claim_type": "exclusive_write", 00:09:54.117 "zoned": false, 00:09:54.117 "supported_io_types": { 00:09:54.117 "read": true, 00:09:54.117 "write": true, 00:09:54.117 "unmap": true, 00:09:54.117 "flush": true, 00:09:54.117 "reset": true, 00:09:54.117 "nvme_admin": false, 00:09:54.117 "nvme_io": false, 00:09:54.117 "nvme_io_md": false, 00:09:54.117 "write_zeroes": true, 00:09:54.117 "zcopy": true, 00:09:54.117 "get_zone_info": false, 00:09:54.117 "zone_management": false, 00:09:54.117 "zone_append": false, 00:09:54.117 "compare": false, 00:09:54.117 "compare_and_write": false, 00:09:54.117 "abort": true, 00:09:54.117 "seek_hole": false, 00:09:54.117 "seek_data": false, 00:09:54.117 "copy": true, 00:09:54.117 "nvme_iov_md": false 00:09:54.117 }, 00:09:54.117 "memory_domains": [ 00:09:54.117 { 00:09:54.117 "dma_device_id": "system", 00:09:54.117 "dma_device_type": 1 00:09:54.117 }, 00:09:54.117 { 00:09:54.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.117 "dma_device_type": 2 00:09:54.117 } 00:09:54.117 ], 00:09:54.117 "driver_specific": {} 00:09:54.117 } 00:09:54.117 ] 00:09:54.117 03:58:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:54.117 "name": "Existed_Raid", 00:09:54.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.117 "strip_size_kb": 0, 00:09:54.117 "state": "configuring", 00:09:54.117 "raid_level": "raid1", 00:09:54.117 "superblock": false, 00:09:54.117 "num_base_bdevs": 3, 00:09:54.117 "num_base_bdevs_discovered": 1, 00:09:54.117 "num_base_bdevs_operational": 3, 00:09:54.117 "base_bdevs_list": [ 00:09:54.117 { 00:09:54.117 "name": "BaseBdev1", 00:09:54.117 "uuid": "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9", 00:09:54.117 "is_configured": true, 00:09:54.117 "data_offset": 0, 00:09:54.117 "data_size": 65536 00:09:54.117 }, 00:09:54.117 { 00:09:54.117 "name": "BaseBdev2", 00:09:54.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.117 "is_configured": false, 00:09:54.117 "data_offset": 0, 00:09:54.117 "data_size": 0 00:09:54.117 }, 00:09:54.117 { 00:09:54.117 "name": "BaseBdev3", 00:09:54.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.117 "is_configured": false, 00:09:54.117 "data_offset": 0, 00:09:54.117 "data_size": 0 00:09:54.117 } 00:09:54.117 ] 00:09:54.117 }' 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.117 03:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.686 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.686 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.686 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.686 [2024-11-18 03:58:51.063808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.687 [2024-11-18 03:58:51.063972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.687 [2024-11-18 03:58:51.075803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.687 [2024-11-18 03:58:51.077912] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.687 [2024-11-18 03:58:51.077954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.687 [2024-11-18 03:58:51.077965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.687 [2024-11-18 03:58:51.077973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.687 "name": "Existed_Raid", 00:09:54.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.687 "strip_size_kb": 0, 00:09:54.687 "state": "configuring", 00:09:54.687 "raid_level": "raid1", 00:09:54.687 "superblock": false, 00:09:54.687 "num_base_bdevs": 3, 00:09:54.687 "num_base_bdevs_discovered": 1, 00:09:54.687 "num_base_bdevs_operational": 3, 00:09:54.687 "base_bdevs_list": [ 00:09:54.687 { 00:09:54.687 "name": "BaseBdev1", 00:09:54.687 "uuid": "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9", 00:09:54.687 "is_configured": true, 00:09:54.687 "data_offset": 0, 00:09:54.687 "data_size": 65536 00:09:54.687 }, 00:09:54.687 { 00:09:54.687 "name": "BaseBdev2", 00:09:54.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.687 
"is_configured": false, 00:09:54.687 "data_offset": 0, 00:09:54.687 "data_size": 0 00:09:54.687 }, 00:09:54.687 { 00:09:54.687 "name": "BaseBdev3", 00:09:54.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.687 "is_configured": false, 00:09:54.687 "data_offset": 0, 00:09:54.687 "data_size": 0 00:09:54.687 } 00:09:54.687 ] 00:09:54.687 }' 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.687 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.948 [2024-11-18 03:58:51.573618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.948 BaseBdev2 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.948 03:58:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.948 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.208 [ 00:09:55.208 { 00:09:55.208 "name": "BaseBdev2", 00:09:55.208 "aliases": [ 00:09:55.208 "b47b98fa-36da-4ead-95fb-75b6061f6157" 00:09:55.208 ], 00:09:55.208 "product_name": "Malloc disk", 00:09:55.208 "block_size": 512, 00:09:55.208 "num_blocks": 65536, 00:09:55.208 "uuid": "b47b98fa-36da-4ead-95fb-75b6061f6157", 00:09:55.208 "assigned_rate_limits": { 00:09:55.208 "rw_ios_per_sec": 0, 00:09:55.208 "rw_mbytes_per_sec": 0, 00:09:55.208 "r_mbytes_per_sec": 0, 00:09:55.208 "w_mbytes_per_sec": 0 00:09:55.208 }, 00:09:55.208 "claimed": true, 00:09:55.208 "claim_type": "exclusive_write", 00:09:55.208 "zoned": false, 00:09:55.208 "supported_io_types": { 00:09:55.208 "read": true, 00:09:55.208 "write": true, 00:09:55.208 "unmap": true, 00:09:55.208 "flush": true, 00:09:55.208 "reset": true, 00:09:55.208 "nvme_admin": false, 00:09:55.208 "nvme_io": false, 00:09:55.208 "nvme_io_md": false, 00:09:55.208 "write_zeroes": true, 00:09:55.208 "zcopy": true, 00:09:55.208 "get_zone_info": false, 00:09:55.208 "zone_management": false, 00:09:55.208 "zone_append": false, 00:09:55.208 "compare": false, 00:09:55.208 "compare_and_write": false, 00:09:55.208 "abort": true, 00:09:55.208 "seek_hole": false, 00:09:55.208 "seek_data": false, 00:09:55.208 "copy": true, 00:09:55.208 "nvme_iov_md": false 00:09:55.208 }, 00:09:55.209 
"memory_domains": [ 00:09:55.209 { 00:09:55.209 "dma_device_id": "system", 00:09:55.209 "dma_device_type": 1 00:09:55.209 }, 00:09:55.209 { 00:09:55.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.209 "dma_device_type": 2 00:09:55.209 } 00:09:55.209 ], 00:09:55.209 "driver_specific": {} 00:09:55.209 } 00:09:55.209 ] 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.209 "name": "Existed_Raid", 00:09:55.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.209 "strip_size_kb": 0, 00:09:55.209 "state": "configuring", 00:09:55.209 "raid_level": "raid1", 00:09:55.209 "superblock": false, 00:09:55.209 "num_base_bdevs": 3, 00:09:55.209 "num_base_bdevs_discovered": 2, 00:09:55.209 "num_base_bdevs_operational": 3, 00:09:55.209 "base_bdevs_list": [ 00:09:55.209 { 00:09:55.209 "name": "BaseBdev1", 00:09:55.209 "uuid": "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9", 00:09:55.209 "is_configured": true, 00:09:55.209 "data_offset": 0, 00:09:55.209 "data_size": 65536 00:09:55.209 }, 00:09:55.209 { 00:09:55.209 "name": "BaseBdev2", 00:09:55.209 "uuid": "b47b98fa-36da-4ead-95fb-75b6061f6157", 00:09:55.209 "is_configured": true, 00:09:55.209 "data_offset": 0, 00:09:55.209 "data_size": 65536 00:09:55.209 }, 00:09:55.209 { 00:09:55.209 "name": "BaseBdev3", 00:09:55.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.209 "is_configured": false, 00:09:55.209 "data_offset": 0, 00:09:55.209 "data_size": 0 00:09:55.209 } 00:09:55.209 ] 00:09:55.209 }' 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.209 03:58:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.469 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:55.469 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.469 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.730 [2024-11-18 03:58:52.132991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.730 [2024-11-18 03:58:52.133130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.730 [2024-11-18 03:58:52.133161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:55.730 [2024-11-18 03:58:52.133479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:55.730 [2024-11-18 03:58:52.133711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.730 [2024-11-18 03:58:52.133748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:55.730 [2024-11-18 03:58:52.134074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.730 BaseBdev3 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.730 [ 00:09:55.730 { 00:09:55.730 "name": "BaseBdev3", 00:09:55.730 "aliases": [ 00:09:55.730 "0fcaef18-7bec-40e2-b5f8-e86ba3f3f278" 00:09:55.730 ], 00:09:55.730 "product_name": "Malloc disk", 00:09:55.730 "block_size": 512, 00:09:55.730 "num_blocks": 65536, 00:09:55.730 "uuid": "0fcaef18-7bec-40e2-b5f8-e86ba3f3f278", 00:09:55.730 "assigned_rate_limits": { 00:09:55.730 "rw_ios_per_sec": 0, 00:09:55.730 "rw_mbytes_per_sec": 0, 00:09:55.730 "r_mbytes_per_sec": 0, 00:09:55.730 "w_mbytes_per_sec": 0 00:09:55.730 }, 00:09:55.730 "claimed": true, 00:09:55.730 "claim_type": "exclusive_write", 00:09:55.730 "zoned": false, 00:09:55.730 "supported_io_types": { 00:09:55.730 "read": true, 00:09:55.730 "write": true, 00:09:55.730 "unmap": true, 00:09:55.730 "flush": true, 00:09:55.730 "reset": true, 00:09:55.730 "nvme_admin": false, 00:09:55.730 "nvme_io": false, 00:09:55.730 "nvme_io_md": false, 00:09:55.730 "write_zeroes": true, 00:09:55.730 "zcopy": true, 00:09:55.730 "get_zone_info": false, 00:09:55.730 "zone_management": false, 00:09:55.730 "zone_append": false, 00:09:55.730 "compare": false, 00:09:55.730 "compare_and_write": false, 00:09:55.730 "abort": true, 00:09:55.730 "seek_hole": false, 00:09:55.730 "seek_data": false, 00:09:55.730 
"copy": true, 00:09:55.730 "nvme_iov_md": false 00:09:55.730 }, 00:09:55.730 "memory_domains": [ 00:09:55.730 { 00:09:55.730 "dma_device_id": "system", 00:09:55.730 "dma_device_type": 1 00:09:55.730 }, 00:09:55.730 { 00:09:55.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.730 "dma_device_type": 2 00:09:55.730 } 00:09:55.730 ], 00:09:55.730 "driver_specific": {} 00:09:55.730 } 00:09:55.730 ] 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.730 03:58:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.730 "name": "Existed_Raid", 00:09:55.730 "uuid": "e0ec429f-6767-4926-8333-03629e32d5a5", 00:09:55.730 "strip_size_kb": 0, 00:09:55.730 "state": "online", 00:09:55.730 "raid_level": "raid1", 00:09:55.730 "superblock": false, 00:09:55.730 "num_base_bdevs": 3, 00:09:55.730 "num_base_bdevs_discovered": 3, 00:09:55.730 "num_base_bdevs_operational": 3, 00:09:55.730 "base_bdevs_list": [ 00:09:55.730 { 00:09:55.730 "name": "BaseBdev1", 00:09:55.730 "uuid": "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9", 00:09:55.730 "is_configured": true, 00:09:55.730 "data_offset": 0, 00:09:55.730 "data_size": 65536 00:09:55.730 }, 00:09:55.730 { 00:09:55.730 "name": "BaseBdev2", 00:09:55.730 "uuid": "b47b98fa-36da-4ead-95fb-75b6061f6157", 00:09:55.730 "is_configured": true, 00:09:55.730 "data_offset": 0, 00:09:55.730 "data_size": 65536 00:09:55.730 }, 00:09:55.730 { 00:09:55.730 "name": "BaseBdev3", 00:09:55.730 "uuid": "0fcaef18-7bec-40e2-b5f8-e86ba3f3f278", 00:09:55.730 "is_configured": true, 00:09:55.730 "data_offset": 0, 00:09:55.730 "data_size": 65536 00:09:55.730 } 00:09:55.730 ] 00:09:55.730 }' 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.730 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.991 03:58:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.991 [2024-11-18 03:58:52.600676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.991 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.251 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.251 "name": "Existed_Raid", 00:09:56.251 "aliases": [ 00:09:56.251 "e0ec429f-6767-4926-8333-03629e32d5a5" 00:09:56.251 ], 00:09:56.251 "product_name": "Raid Volume", 00:09:56.251 "block_size": 512, 00:09:56.251 "num_blocks": 65536, 00:09:56.251 "uuid": "e0ec429f-6767-4926-8333-03629e32d5a5", 00:09:56.251 "assigned_rate_limits": { 00:09:56.251 "rw_ios_per_sec": 0, 00:09:56.251 "rw_mbytes_per_sec": 0, 00:09:56.251 "r_mbytes_per_sec": 0, 00:09:56.251 "w_mbytes_per_sec": 0 00:09:56.251 }, 00:09:56.251 "claimed": false, 00:09:56.251 "zoned": false, 
00:09:56.251 "supported_io_types": { 00:09:56.251 "read": true, 00:09:56.251 "write": true, 00:09:56.251 "unmap": false, 00:09:56.251 "flush": false, 00:09:56.251 "reset": true, 00:09:56.251 "nvme_admin": false, 00:09:56.251 "nvme_io": false, 00:09:56.251 "nvme_io_md": false, 00:09:56.251 "write_zeroes": true, 00:09:56.251 "zcopy": false, 00:09:56.251 "get_zone_info": false, 00:09:56.251 "zone_management": false, 00:09:56.251 "zone_append": false, 00:09:56.251 "compare": false, 00:09:56.251 "compare_and_write": false, 00:09:56.251 "abort": false, 00:09:56.251 "seek_hole": false, 00:09:56.251 "seek_data": false, 00:09:56.251 "copy": false, 00:09:56.251 "nvme_iov_md": false 00:09:56.251 }, 00:09:56.251 "memory_domains": [ 00:09:56.251 { 00:09:56.251 "dma_device_id": "system", 00:09:56.251 "dma_device_type": 1 00:09:56.251 }, 00:09:56.251 { 00:09:56.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.251 "dma_device_type": 2 00:09:56.251 }, 00:09:56.251 { 00:09:56.251 "dma_device_id": "system", 00:09:56.251 "dma_device_type": 1 00:09:56.251 }, 00:09:56.251 { 00:09:56.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.251 "dma_device_type": 2 00:09:56.251 }, 00:09:56.251 { 00:09:56.251 "dma_device_id": "system", 00:09:56.251 "dma_device_type": 1 00:09:56.251 }, 00:09:56.251 { 00:09:56.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.251 "dma_device_type": 2 00:09:56.251 } 00:09:56.251 ], 00:09:56.251 "driver_specific": { 00:09:56.251 "raid": { 00:09:56.251 "uuid": "e0ec429f-6767-4926-8333-03629e32d5a5", 00:09:56.251 "strip_size_kb": 0, 00:09:56.251 "state": "online", 00:09:56.251 "raid_level": "raid1", 00:09:56.251 "superblock": false, 00:09:56.251 "num_base_bdevs": 3, 00:09:56.251 "num_base_bdevs_discovered": 3, 00:09:56.251 "num_base_bdevs_operational": 3, 00:09:56.251 "base_bdevs_list": [ 00:09:56.251 { 00:09:56.251 "name": "BaseBdev1", 00:09:56.251 "uuid": "7cd24a68-4ecf-4f69-aef0-4f1faa20c5d9", 00:09:56.251 "is_configured": true, 00:09:56.251 
"data_offset": 0, 00:09:56.251 "data_size": 65536 00:09:56.251 }, 00:09:56.251 { 00:09:56.251 "name": "BaseBdev2", 00:09:56.252 "uuid": "b47b98fa-36da-4ead-95fb-75b6061f6157", 00:09:56.252 "is_configured": true, 00:09:56.252 "data_offset": 0, 00:09:56.252 "data_size": 65536 00:09:56.252 }, 00:09:56.252 { 00:09:56.252 "name": "BaseBdev3", 00:09:56.252 "uuid": "0fcaef18-7bec-40e2-b5f8-e86ba3f3f278", 00:09:56.252 "is_configured": true, 00:09:56.252 "data_offset": 0, 00:09:56.252 "data_size": 65536 00:09:56.252 } 00:09:56.252 ] 00:09:56.252 } 00:09:56.252 } 00:09:56.252 }' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.252 BaseBdev2 00:09:56.252 BaseBdev3' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.252 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.252 [2024-11-18 03:58:52.871938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.512 03:58:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.512 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.512 "name": "Existed_Raid", 00:09:56.512 "uuid": "e0ec429f-6767-4926-8333-03629e32d5a5", 00:09:56.512 "strip_size_kb": 0, 00:09:56.512 "state": "online", 00:09:56.512 "raid_level": "raid1", 00:09:56.512 "superblock": false, 00:09:56.512 "num_base_bdevs": 3, 00:09:56.512 "num_base_bdevs_discovered": 2, 00:09:56.512 "num_base_bdevs_operational": 2, 00:09:56.512 "base_bdevs_list": [ 00:09:56.512 { 00:09:56.512 "name": null, 00:09:56.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.512 "is_configured": false, 00:09:56.512 "data_offset": 0, 00:09:56.512 "data_size": 65536 00:09:56.512 }, 00:09:56.512 { 00:09:56.512 "name": "BaseBdev2", 00:09:56.512 "uuid": "b47b98fa-36da-4ead-95fb-75b6061f6157", 00:09:56.512 "is_configured": true, 00:09:56.512 "data_offset": 0, 00:09:56.512 "data_size": 65536 00:09:56.512 }, 00:09:56.512 { 00:09:56.512 "name": "BaseBdev3", 00:09:56.512 "uuid": "0fcaef18-7bec-40e2-b5f8-e86ba3f3f278", 00:09:56.512 "is_configured": true, 00:09:56.512 "data_offset": 0, 00:09:56.512 "data_size": 65536 00:09:56.512 } 00:09:56.512 ] 
00:09:56.512 }' 00:09:56.512 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.512 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.771 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.771 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.030 [2024-11-18 03:58:53.464457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.030 03:58:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.030 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.030 [2024-11-18 03:58:53.639524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.030 [2024-11-18 03:58:53.639751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.289 [2024-11-18 03:58:53.768481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.289 [2024-11-18 03:58:53.768638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.289 [2024-11-18 03:58:53.768692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.289 03:58:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 BaseBdev2 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.289 
03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.289 [ 00:09:57.289 { 00:09:57.289 "name": "BaseBdev2", 00:09:57.289 "aliases": [ 00:09:57.289 "622b1a3f-acbc-4cf9-ae45-399cfa061597" 00:09:57.289 ], 00:09:57.289 "product_name": "Malloc disk", 00:09:57.289 "block_size": 512, 00:09:57.289 "num_blocks": 65536, 00:09:57.289 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:09:57.289 "assigned_rate_limits": { 00:09:57.289 "rw_ios_per_sec": 0, 00:09:57.289 "rw_mbytes_per_sec": 0, 00:09:57.289 "r_mbytes_per_sec": 0, 00:09:57.289 "w_mbytes_per_sec": 0 00:09:57.289 }, 00:09:57.289 "claimed": false, 00:09:57.289 "zoned": false, 00:09:57.289 "supported_io_types": { 00:09:57.289 "read": true, 00:09:57.289 "write": true, 00:09:57.289 "unmap": true, 00:09:57.289 "flush": true, 00:09:57.289 "reset": true, 00:09:57.289 "nvme_admin": false, 00:09:57.289 "nvme_io": false, 00:09:57.289 "nvme_io_md": false, 00:09:57.289 "write_zeroes": true, 
00:09:57.289 "zcopy": true, 00:09:57.289 "get_zone_info": false, 00:09:57.289 "zone_management": false, 00:09:57.289 "zone_append": false, 00:09:57.289 "compare": false, 00:09:57.289 "compare_and_write": false, 00:09:57.289 "abort": true, 00:09:57.289 "seek_hole": false, 00:09:57.289 "seek_data": false, 00:09:57.289 "copy": true, 00:09:57.289 "nvme_iov_md": false 00:09:57.289 }, 00:09:57.289 "memory_domains": [ 00:09:57.289 { 00:09:57.289 "dma_device_id": "system", 00:09:57.289 "dma_device_type": 1 00:09:57.289 }, 00:09:57.289 { 00:09:57.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.289 "dma_device_type": 2 00:09:57.289 } 00:09:57.289 ], 00:09:57.289 "driver_specific": {} 00:09:57.289 } 00:09:57.289 ] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.289 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.548 BaseBdev3 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.548 03:58:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.548 03:58:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.548 [ 00:09:57.548 { 00:09:57.548 "name": "BaseBdev3", 00:09:57.548 "aliases": [ 00:09:57.548 "dee4f0e5-1e1f-4599-bae1-170f51692dc9" 00:09:57.548 ], 00:09:57.548 "product_name": "Malloc disk", 00:09:57.548 "block_size": 512, 00:09:57.548 "num_blocks": 65536, 00:09:57.548 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:09:57.548 "assigned_rate_limits": { 00:09:57.548 "rw_ios_per_sec": 0, 00:09:57.548 "rw_mbytes_per_sec": 0, 00:09:57.548 "r_mbytes_per_sec": 0, 00:09:57.548 "w_mbytes_per_sec": 0 00:09:57.548 }, 00:09:57.548 "claimed": false, 00:09:57.548 "zoned": false, 00:09:57.548 "supported_io_types": { 00:09:57.548 "read": true, 00:09:57.548 "write": true, 00:09:57.548 "unmap": true, 00:09:57.548 "flush": true, 00:09:57.548 "reset": true, 00:09:57.548 "nvme_admin": false, 00:09:57.548 "nvme_io": false, 00:09:57.548 "nvme_io_md": false, 00:09:57.548 "write_zeroes": true, 
00:09:57.548 "zcopy": true, 00:09:57.548 "get_zone_info": false, 00:09:57.548 "zone_management": false, 00:09:57.548 "zone_append": false, 00:09:57.548 "compare": false, 00:09:57.548 "compare_and_write": false, 00:09:57.548 "abort": true, 00:09:57.548 "seek_hole": false, 00:09:57.548 "seek_data": false, 00:09:57.548 "copy": true, 00:09:57.548 "nvme_iov_md": false 00:09:57.548 }, 00:09:57.548 "memory_domains": [ 00:09:57.548 { 00:09:57.548 "dma_device_id": "system", 00:09:57.548 "dma_device_type": 1 00:09:57.548 }, 00:09:57.548 { 00:09:57.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.548 "dma_device_type": 2 00:09:57.548 } 00:09:57.548 ], 00:09:57.548 "driver_specific": {} 00:09:57.548 } 00:09:57.548 ] 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.548 [2024-11-18 03:58:54.014008] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.548 [2024-11-18 03:58:54.014152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.548 [2024-11-18 03:58:54.014202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.548 [2024-11-18 03:58:54.016759] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.548 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:57.548 "name": "Existed_Raid", 00:09:57.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.548 "strip_size_kb": 0, 00:09:57.548 "state": "configuring", 00:09:57.548 "raid_level": "raid1", 00:09:57.548 "superblock": false, 00:09:57.548 "num_base_bdevs": 3, 00:09:57.548 "num_base_bdevs_discovered": 2, 00:09:57.548 "num_base_bdevs_operational": 3, 00:09:57.548 "base_bdevs_list": [ 00:09:57.548 { 00:09:57.548 "name": "BaseBdev1", 00:09:57.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.548 "is_configured": false, 00:09:57.548 "data_offset": 0, 00:09:57.548 "data_size": 0 00:09:57.548 }, 00:09:57.548 { 00:09:57.549 "name": "BaseBdev2", 00:09:57.549 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:09:57.549 "is_configured": true, 00:09:57.549 "data_offset": 0, 00:09:57.549 "data_size": 65536 00:09:57.549 }, 00:09:57.549 { 00:09:57.549 "name": "BaseBdev3", 00:09:57.549 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:09:57.549 "is_configured": true, 00:09:57.549 "data_offset": 0, 00:09:57.549 "data_size": 65536 00:09:57.549 } 00:09:57.549 ] 00:09:57.549 }' 00:09:57.549 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.549 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.808 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:57.808 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.068 [2024-11-18 03:58:54.453318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.068 "name": "Existed_Raid", 00:09:58.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.068 "strip_size_kb": 0, 00:09:58.068 "state": "configuring", 00:09:58.068 "raid_level": "raid1", 00:09:58.068 "superblock": false, 00:09:58.068 "num_base_bdevs": 3, 
00:09:58.068 "num_base_bdevs_discovered": 1, 00:09:58.068 "num_base_bdevs_operational": 3, 00:09:58.068 "base_bdevs_list": [ 00:09:58.068 { 00:09:58.068 "name": "BaseBdev1", 00:09:58.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.068 "is_configured": false, 00:09:58.068 "data_offset": 0, 00:09:58.068 "data_size": 0 00:09:58.068 }, 00:09:58.068 { 00:09:58.068 "name": null, 00:09:58.068 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:09:58.068 "is_configured": false, 00:09:58.068 "data_offset": 0, 00:09:58.068 "data_size": 65536 00:09:58.068 }, 00:09:58.068 { 00:09:58.068 "name": "BaseBdev3", 00:09:58.068 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:09:58.068 "is_configured": true, 00:09:58.068 "data_offset": 0, 00:09:58.068 "data_size": 65536 00:09:58.068 } 00:09:58.068 ] 00:09:58.068 }' 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.068 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.327 03:58:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.327 03:58:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.586 [2024-11-18 03:58:55.004171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.586 BaseBdev1 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.586 [ 00:09:58.586 { 00:09:58.586 "name": "BaseBdev1", 00:09:58.586 "aliases": [ 00:09:58.586 "ebd11edc-9aa9-46fa-bab3-defad46d77f3" 00:09:58.586 ], 00:09:58.586 "product_name": "Malloc disk", 
00:09:58.586 "block_size": 512, 00:09:58.586 "num_blocks": 65536, 00:09:58.586 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:09:58.586 "assigned_rate_limits": { 00:09:58.586 "rw_ios_per_sec": 0, 00:09:58.586 "rw_mbytes_per_sec": 0, 00:09:58.586 "r_mbytes_per_sec": 0, 00:09:58.586 "w_mbytes_per_sec": 0 00:09:58.586 }, 00:09:58.586 "claimed": true, 00:09:58.586 "claim_type": "exclusive_write", 00:09:58.586 "zoned": false, 00:09:58.586 "supported_io_types": { 00:09:58.586 "read": true, 00:09:58.586 "write": true, 00:09:58.586 "unmap": true, 00:09:58.586 "flush": true, 00:09:58.586 "reset": true, 00:09:58.586 "nvme_admin": false, 00:09:58.586 "nvme_io": false, 00:09:58.586 "nvme_io_md": false, 00:09:58.586 "write_zeroes": true, 00:09:58.586 "zcopy": true, 00:09:58.586 "get_zone_info": false, 00:09:58.586 "zone_management": false, 00:09:58.586 "zone_append": false, 00:09:58.586 "compare": false, 00:09:58.586 "compare_and_write": false, 00:09:58.586 "abort": true, 00:09:58.586 "seek_hole": false, 00:09:58.586 "seek_data": false, 00:09:58.586 "copy": true, 00:09:58.586 "nvme_iov_md": false 00:09:58.586 }, 00:09:58.586 "memory_domains": [ 00:09:58.586 { 00:09:58.586 "dma_device_id": "system", 00:09:58.586 "dma_device_type": 1 00:09:58.586 }, 00:09:58.586 { 00:09:58.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.586 "dma_device_type": 2 00:09:58.586 } 00:09:58.586 ], 00:09:58.586 "driver_specific": {} 00:09:58.586 } 00:09:58.586 ] 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.586 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.586 "name": "Existed_Raid", 00:09:58.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.586 "strip_size_kb": 0, 00:09:58.586 "state": "configuring", 00:09:58.586 "raid_level": "raid1", 00:09:58.586 "superblock": false, 00:09:58.586 "num_base_bdevs": 3, 00:09:58.586 "num_base_bdevs_discovered": 2, 00:09:58.586 "num_base_bdevs_operational": 3, 00:09:58.586 "base_bdevs_list": [ 00:09:58.586 { 00:09:58.586 "name": "BaseBdev1", 00:09:58.586 "uuid": 
"ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:09:58.586 "is_configured": true, 00:09:58.586 "data_offset": 0, 00:09:58.586 "data_size": 65536 00:09:58.586 }, 00:09:58.586 { 00:09:58.586 "name": null, 00:09:58.586 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:09:58.586 "is_configured": false, 00:09:58.586 "data_offset": 0, 00:09:58.586 "data_size": 65536 00:09:58.586 }, 00:09:58.586 { 00:09:58.586 "name": "BaseBdev3", 00:09:58.586 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:09:58.586 "is_configured": true, 00:09:58.586 "data_offset": 0, 00:09:58.586 "data_size": 65536 00:09:58.586 } 00:09:58.586 ] 00:09:58.586 }' 00:09:58.587 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.587 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.846 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.846 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.846 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.846 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.846 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.105 [2024-11-18 03:58:55.519596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.105 03:58:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.105 "name": "Existed_Raid", 00:09:59.105 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:59.105 "strip_size_kb": 0, 00:09:59.105 "state": "configuring", 00:09:59.105 "raid_level": "raid1", 00:09:59.105 "superblock": false, 00:09:59.105 "num_base_bdevs": 3, 00:09:59.105 "num_base_bdevs_discovered": 1, 00:09:59.105 "num_base_bdevs_operational": 3, 00:09:59.105 "base_bdevs_list": [ 00:09:59.105 { 00:09:59.105 "name": "BaseBdev1", 00:09:59.105 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:09:59.105 "is_configured": true, 00:09:59.105 "data_offset": 0, 00:09:59.105 "data_size": 65536 00:09:59.105 }, 00:09:59.105 { 00:09:59.105 "name": null, 00:09:59.105 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:09:59.105 "is_configured": false, 00:09:59.105 "data_offset": 0, 00:09:59.105 "data_size": 65536 00:09:59.105 }, 00:09:59.105 { 00:09:59.105 "name": null, 00:09:59.105 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:09:59.105 "is_configured": false, 00:09:59.105 "data_offset": 0, 00:09:59.105 "data_size": 65536 00:09:59.105 } 00:09:59.105 ] 00:09:59.105 }' 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.105 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.365 03:58:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.365 [2024-11-18 03:58:55.999063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.365 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.365 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.624 "name": "Existed_Raid", 00:09:59.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.624 "strip_size_kb": 0, 00:09:59.624 "state": "configuring", 00:09:59.624 "raid_level": "raid1", 00:09:59.624 "superblock": false, 00:09:59.624 "num_base_bdevs": 3, 00:09:59.624 "num_base_bdevs_discovered": 2, 00:09:59.624 "num_base_bdevs_operational": 3, 00:09:59.624 "base_bdevs_list": [ 00:09:59.624 { 00:09:59.624 "name": "BaseBdev1", 00:09:59.624 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:09:59.624 "is_configured": true, 00:09:59.624 "data_offset": 0, 00:09:59.624 "data_size": 65536 00:09:59.624 }, 00:09:59.624 { 00:09:59.624 "name": null, 00:09:59.624 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:09:59.624 "is_configured": false, 00:09:59.624 "data_offset": 0, 00:09:59.624 "data_size": 65536 00:09:59.624 }, 00:09:59.624 { 00:09:59.624 "name": "BaseBdev3", 00:09:59.624 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:09:59.624 "is_configured": true, 00:09:59.624 "data_offset": 0, 00:09:59.624 "data_size": 65536 00:09:59.624 } 00:09:59.624 ] 00:09:59.624 }' 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.624 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.883 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.883 [2024-11-18 03:58:56.486470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.141 03:58:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.141 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.141 "name": "Existed_Raid", 00:10:00.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.141 "strip_size_kb": 0, 00:10:00.141 "state": "configuring", 00:10:00.141 "raid_level": "raid1", 00:10:00.141 "superblock": false, 00:10:00.141 "num_base_bdevs": 3, 00:10:00.141 "num_base_bdevs_discovered": 1, 00:10:00.141 "num_base_bdevs_operational": 3, 00:10:00.141 "base_bdevs_list": [ 00:10:00.141 { 00:10:00.141 "name": null, 00:10:00.141 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:10:00.141 "is_configured": false, 00:10:00.141 "data_offset": 0, 00:10:00.141 "data_size": 65536 00:10:00.141 }, 00:10:00.141 { 00:10:00.141 "name": null, 00:10:00.142 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:10:00.142 "is_configured": false, 00:10:00.142 "data_offset": 0, 00:10:00.142 "data_size": 65536 00:10:00.142 }, 00:10:00.142 { 00:10:00.142 "name": "BaseBdev3", 00:10:00.142 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:10:00.142 "is_configured": true, 00:10:00.142 "data_offset": 0, 00:10:00.142 "data_size": 65536 00:10:00.142 } 00:10:00.142 ] 00:10:00.142 }' 00:10:00.142 03:58:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.142 03:58:56 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.709 [2024-11-18 03:58:57.139632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.709 "name": "Existed_Raid", 00:10:00.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.709 "strip_size_kb": 0, 00:10:00.709 "state": "configuring", 00:10:00.709 "raid_level": "raid1", 00:10:00.709 "superblock": false, 00:10:00.709 "num_base_bdevs": 3, 00:10:00.709 "num_base_bdevs_discovered": 2, 00:10:00.709 "num_base_bdevs_operational": 3, 00:10:00.709 "base_bdevs_list": [ 00:10:00.709 { 00:10:00.709 "name": null, 00:10:00.709 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:10:00.709 "is_configured": false, 00:10:00.709 "data_offset": 0, 00:10:00.709 "data_size": 65536 00:10:00.709 }, 00:10:00.709 { 00:10:00.709 "name": "BaseBdev2", 00:10:00.709 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:10:00.709 "is_configured": true, 00:10:00.709 "data_offset": 0, 00:10:00.709 "data_size": 65536 00:10:00.709 }, 00:10:00.709 { 
00:10:00.709 "name": "BaseBdev3", 00:10:00.709 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:10:00.709 "is_configured": true, 00:10:00.709 "data_offset": 0, 00:10:00.709 "data_size": 65536 00:10:00.709 } 00:10:00.709 ] 00:10:00.709 }' 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.709 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ebd11edc-9aa9-46fa-bab3-defad46d77f3 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.276 03:58:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.276 [2024-11-18 03:58:57.767341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:01.276 [2024-11-18 03:58:57.767545] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.276 [2024-11-18 03:58:57.767577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:01.276 [2024-11-18 03:58:57.767958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:01.276 [2024-11-18 03:58:57.768218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.276 [2024-11-18 03:58:57.768272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:01.276 [2024-11-18 03:58:57.768636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.276 NewBaseBdev 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:01.276 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 [ 00:10:01.277 { 00:10:01.277 "name": "NewBaseBdev", 00:10:01.277 "aliases": [ 00:10:01.277 "ebd11edc-9aa9-46fa-bab3-defad46d77f3" 00:10:01.277 ], 00:10:01.277 "product_name": "Malloc disk", 00:10:01.277 "block_size": 512, 00:10:01.277 "num_blocks": 65536, 00:10:01.277 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:10:01.277 "assigned_rate_limits": { 00:10:01.277 "rw_ios_per_sec": 0, 00:10:01.277 "rw_mbytes_per_sec": 0, 00:10:01.277 "r_mbytes_per_sec": 0, 00:10:01.277 "w_mbytes_per_sec": 0 00:10:01.277 }, 00:10:01.277 "claimed": true, 00:10:01.277 "claim_type": "exclusive_write", 00:10:01.277 "zoned": false, 00:10:01.277 "supported_io_types": { 00:10:01.277 "read": true, 00:10:01.277 "write": true, 00:10:01.277 "unmap": true, 00:10:01.277 "flush": true, 00:10:01.277 "reset": true, 00:10:01.277 "nvme_admin": false, 00:10:01.277 "nvme_io": false, 00:10:01.277 "nvme_io_md": false, 00:10:01.277 "write_zeroes": true, 00:10:01.277 "zcopy": true, 00:10:01.277 "get_zone_info": false, 00:10:01.277 "zone_management": false, 00:10:01.277 "zone_append": false, 00:10:01.277 "compare": false, 00:10:01.277 "compare_and_write": false, 00:10:01.277 "abort": true, 00:10:01.277 "seek_hole": false, 00:10:01.277 "seek_data": false, 00:10:01.277 "copy": true, 00:10:01.277 "nvme_iov_md": false 00:10:01.277 }, 00:10:01.277 "memory_domains": [ 00:10:01.277 { 00:10:01.277 
"dma_device_id": "system", 00:10:01.277 "dma_device_type": 1 00:10:01.277 }, 00:10:01.277 { 00:10:01.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.277 "dma_device_type": 2 00:10:01.277 } 00:10:01.277 ], 00:10:01.277 "driver_specific": {} 00:10:01.277 } 00:10:01.277 ] 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.277 "name": "Existed_Raid", 00:10:01.277 "uuid": "5e9e05be-bd9d-4229-881b-205440d0f886", 00:10:01.277 "strip_size_kb": 0, 00:10:01.277 "state": "online", 00:10:01.277 "raid_level": "raid1", 00:10:01.277 "superblock": false, 00:10:01.277 "num_base_bdevs": 3, 00:10:01.277 "num_base_bdevs_discovered": 3, 00:10:01.277 "num_base_bdevs_operational": 3, 00:10:01.277 "base_bdevs_list": [ 00:10:01.277 { 00:10:01.277 "name": "NewBaseBdev", 00:10:01.277 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:10:01.277 "is_configured": true, 00:10:01.277 "data_offset": 0, 00:10:01.277 "data_size": 65536 00:10:01.277 }, 00:10:01.277 { 00:10:01.277 "name": "BaseBdev2", 00:10:01.277 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:10:01.277 "is_configured": true, 00:10:01.277 "data_offset": 0, 00:10:01.277 "data_size": 65536 00:10:01.277 }, 00:10:01.277 { 00:10:01.277 "name": "BaseBdev3", 00:10:01.277 "uuid": "dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:10:01.277 "is_configured": true, 00:10:01.277 "data_offset": 0, 00:10:01.277 "data_size": 65536 00:10:01.277 } 00:10:01.277 ] 00:10:01.277 }' 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.277 03:58:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.846 03:58:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.846 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 [2024-11-18 03:58:58.263141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.847 "name": "Existed_Raid", 00:10:01.847 "aliases": [ 00:10:01.847 "5e9e05be-bd9d-4229-881b-205440d0f886" 00:10:01.847 ], 00:10:01.847 "product_name": "Raid Volume", 00:10:01.847 "block_size": 512, 00:10:01.847 "num_blocks": 65536, 00:10:01.847 "uuid": "5e9e05be-bd9d-4229-881b-205440d0f886", 00:10:01.847 "assigned_rate_limits": { 00:10:01.847 "rw_ios_per_sec": 0, 00:10:01.847 "rw_mbytes_per_sec": 0, 00:10:01.847 "r_mbytes_per_sec": 0, 00:10:01.847 "w_mbytes_per_sec": 0 00:10:01.847 }, 00:10:01.847 "claimed": false, 00:10:01.847 "zoned": false, 00:10:01.847 "supported_io_types": { 00:10:01.847 "read": true, 00:10:01.847 "write": true, 00:10:01.847 "unmap": false, 00:10:01.847 "flush": false, 00:10:01.847 "reset": true, 00:10:01.847 "nvme_admin": false, 00:10:01.847 "nvme_io": false, 00:10:01.847 "nvme_io_md": false, 00:10:01.847 "write_zeroes": true, 00:10:01.847 "zcopy": false, 00:10:01.847 
"get_zone_info": false, 00:10:01.847 "zone_management": false, 00:10:01.847 "zone_append": false, 00:10:01.847 "compare": false, 00:10:01.847 "compare_and_write": false, 00:10:01.847 "abort": false, 00:10:01.847 "seek_hole": false, 00:10:01.847 "seek_data": false, 00:10:01.847 "copy": false, 00:10:01.847 "nvme_iov_md": false 00:10:01.847 }, 00:10:01.847 "memory_domains": [ 00:10:01.847 { 00:10:01.847 "dma_device_id": "system", 00:10:01.847 "dma_device_type": 1 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.847 "dma_device_type": 2 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "dma_device_id": "system", 00:10:01.847 "dma_device_type": 1 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.847 "dma_device_type": 2 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "dma_device_id": "system", 00:10:01.847 "dma_device_type": 1 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.847 "dma_device_type": 2 00:10:01.847 } 00:10:01.847 ], 00:10:01.847 "driver_specific": { 00:10:01.847 "raid": { 00:10:01.847 "uuid": "5e9e05be-bd9d-4229-881b-205440d0f886", 00:10:01.847 "strip_size_kb": 0, 00:10:01.847 "state": "online", 00:10:01.847 "raid_level": "raid1", 00:10:01.847 "superblock": false, 00:10:01.847 "num_base_bdevs": 3, 00:10:01.847 "num_base_bdevs_discovered": 3, 00:10:01.847 "num_base_bdevs_operational": 3, 00:10:01.847 "base_bdevs_list": [ 00:10:01.847 { 00:10:01.847 "name": "NewBaseBdev", 00:10:01.847 "uuid": "ebd11edc-9aa9-46fa-bab3-defad46d77f3", 00:10:01.847 "is_configured": true, 00:10:01.847 "data_offset": 0, 00:10:01.847 "data_size": 65536 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "name": "BaseBdev2", 00:10:01.847 "uuid": "622b1a3f-acbc-4cf9-ae45-399cfa061597", 00:10:01.847 "is_configured": true, 00:10:01.847 "data_offset": 0, 00:10:01.847 "data_size": 65536 00:10:01.847 }, 00:10:01.847 { 00:10:01.847 "name": "BaseBdev3", 00:10:01.847 "uuid": 
"dee4f0e5-1e1f-4599-bae1-170f51692dc9", 00:10:01.847 "is_configured": true, 00:10:01.847 "data_offset": 0, 00:10:01.847 "data_size": 65536 00:10:01.847 } 00:10:01.847 ] 00:10:01.847 } 00:10:01.847 } 00:10:01.847 }' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:01.847 BaseBdev2 00:10:01.847 BaseBdev3' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.847 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.107 
[2024-11-18 03:58:58.526283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.107 [2024-11-18 03:58:58.526335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.107 [2024-11-18 03:58:58.526426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.107 [2024-11-18 03:58:58.526752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.107 [2024-11-18 03:58:58.526763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67374 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67374 ']' 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67374 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67374 00:10:02.107 killing process with pid 67374 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67374' 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67374 00:10:02.107 [2024-11-18 
03:58:58.566890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.107 03:58:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67374 00:10:02.366 [2024-11-18 03:58:58.956839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.744 00:10:03.744 real 0m11.162s 00:10:03.744 user 0m17.385s 00:10:03.744 sys 0m1.955s 00:10:03.744 ************************************ 00:10:03.744 END TEST raid_state_function_test 00:10:03.744 ************************************ 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.744 03:59:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:03.744 03:59:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:03.744 03:59:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.744 03:59:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.744 ************************************ 00:10:03.744 START TEST raid_state_function_test_sb 00:10:03.744 ************************************ 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.744 03:59:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:03.744 
03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68006 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68006' 00:10:03.744 Process raid pid: 68006 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68006 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68006 ']' 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.744 03:59:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.004 [2024-11-18 03:59:00.474898] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:04.004 [2024-11-18 03:59:00.475124] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.263 [2024-11-18 03:59:00.649216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.263 [2024-11-18 03:59:00.796741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.523 [2024-11-18 03:59:01.069617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.523 [2024-11-18 03:59:01.069769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.782 [2024-11-18 03:59:01.302467] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.782 [2024-11-18 03:59:01.302613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.782 [2024-11-18 03:59:01.302652] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.782 [2024-11-18 03:59:01.302687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.782 [2024-11-18 03:59:01.302723] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:04.782 [2024-11-18 03:59:01.302757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.782 "name": "Existed_Raid", 00:10:04.782 "uuid": "1113dcc7-62d5-440b-9097-8c5e1f671fb7", 00:10:04.782 "strip_size_kb": 0, 00:10:04.782 "state": "configuring", 00:10:04.782 "raid_level": "raid1", 00:10:04.782 "superblock": true, 00:10:04.782 "num_base_bdevs": 3, 00:10:04.782 "num_base_bdevs_discovered": 0, 00:10:04.782 "num_base_bdevs_operational": 3, 00:10:04.782 "base_bdevs_list": [ 00:10:04.782 { 00:10:04.782 "name": "BaseBdev1", 00:10:04.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.782 "is_configured": false, 00:10:04.782 "data_offset": 0, 00:10:04.782 "data_size": 0 00:10:04.782 }, 00:10:04.782 { 00:10:04.782 "name": "BaseBdev2", 00:10:04.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.782 "is_configured": false, 00:10:04.782 "data_offset": 0, 00:10:04.782 "data_size": 0 00:10:04.782 }, 00:10:04.782 { 00:10:04.782 "name": "BaseBdev3", 00:10:04.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.782 "is_configured": false, 00:10:04.782 "data_offset": 0, 00:10:04.782 "data_size": 0 00:10:04.782 } 00:10:04.782 ] 00:10:04.782 }' 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.782 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 [2024-11-18 03:59:01.753676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.351 [2024-11-18 03:59:01.753817] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 [2024-11-18 03:59:01.765611] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.351 [2024-11-18 03:59:01.765714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.351 [2024-11-18 03:59:01.765750] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.351 [2024-11-18 03:59:01.765778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.351 [2024-11-18 03:59:01.765805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.351 [2024-11-18 03:59:01.765866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 [2024-11-18 03:59:01.822979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.351 BaseBdev1 
00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 [ 00:10:05.351 { 00:10:05.351 "name": "BaseBdev1", 00:10:05.351 "aliases": [ 00:10:05.351 "f2ee4e15-0aef-4a35-a232-df83a20be600" 00:10:05.351 ], 00:10:05.351 "product_name": "Malloc disk", 00:10:05.351 "block_size": 512, 00:10:05.351 "num_blocks": 65536, 00:10:05.351 "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", 00:10:05.351 "assigned_rate_limits": { 00:10:05.351 
"rw_ios_per_sec": 0, 00:10:05.351 "rw_mbytes_per_sec": 0, 00:10:05.351 "r_mbytes_per_sec": 0, 00:10:05.351 "w_mbytes_per_sec": 0 00:10:05.351 }, 00:10:05.351 "claimed": true, 00:10:05.351 "claim_type": "exclusive_write", 00:10:05.351 "zoned": false, 00:10:05.351 "supported_io_types": { 00:10:05.351 "read": true, 00:10:05.351 "write": true, 00:10:05.351 "unmap": true, 00:10:05.351 "flush": true, 00:10:05.351 "reset": true, 00:10:05.351 "nvme_admin": false, 00:10:05.351 "nvme_io": false, 00:10:05.351 "nvme_io_md": false, 00:10:05.351 "write_zeroes": true, 00:10:05.351 "zcopy": true, 00:10:05.351 "get_zone_info": false, 00:10:05.351 "zone_management": false, 00:10:05.351 "zone_append": false, 00:10:05.351 "compare": false, 00:10:05.351 "compare_and_write": false, 00:10:05.351 "abort": true, 00:10:05.351 "seek_hole": false, 00:10:05.351 "seek_data": false, 00:10:05.351 "copy": true, 00:10:05.351 "nvme_iov_md": false 00:10:05.351 }, 00:10:05.351 "memory_domains": [ 00:10:05.351 { 00:10:05.351 "dma_device_id": "system", 00:10:05.351 "dma_device_type": 1 00:10:05.351 }, 00:10:05.351 { 00:10:05.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.351 "dma_device_type": 2 00:10:05.351 } 00:10:05.351 ], 00:10:05.351 "driver_specific": {} 00:10:05.351 } 00:10:05.351 ] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.351 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.351 "name": "Existed_Raid", 00:10:05.351 "uuid": "e3525956-8228-4045-9c32-8bde0ac8a51d", 00:10:05.351 "strip_size_kb": 0, 00:10:05.351 "state": "configuring", 00:10:05.351 "raid_level": "raid1", 00:10:05.351 "superblock": true, 00:10:05.351 "num_base_bdevs": 3, 00:10:05.351 "num_base_bdevs_discovered": 1, 00:10:05.351 "num_base_bdevs_operational": 3, 00:10:05.351 "base_bdevs_list": [ 00:10:05.351 { 00:10:05.351 "name": "BaseBdev1", 00:10:05.351 "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", 00:10:05.351 "is_configured": true, 00:10:05.352 "data_offset": 2048, 00:10:05.352 "data_size": 63488 
00:10:05.352 }, 00:10:05.352 { 00:10:05.352 "name": "BaseBdev2", 00:10:05.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.352 "is_configured": false, 00:10:05.352 "data_offset": 0, 00:10:05.352 "data_size": 0 00:10:05.352 }, 00:10:05.352 { 00:10:05.352 "name": "BaseBdev3", 00:10:05.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.352 "is_configured": false, 00:10:05.352 "data_offset": 0, 00:10:05.352 "data_size": 0 00:10:05.352 } 00:10:05.352 ] 00:10:05.352 }' 00:10:05.352 03:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.352 03:59:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 [2024-11-18 03:59:02.318294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.920 [2024-11-18 03:59:02.318466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.920 [2024-11-18 03:59:02.330288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.920 [2024-11-18 03:59:02.332592] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.920 [2024-11-18 03:59:02.332694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.920 [2024-11-18 03:59:02.332740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.920 [2024-11-18 03:59:02.332763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.920 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
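(Editor's note: the `verify_raid_bdev_state` calls interleaved through this log pull the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` and compare state, level, strip size, and the discovered/operational counts. A sketch of that check in Python — the helper itself is bash; the JSON is copied, trimmed, from the "configuring" dump earlier in this log:)

```python
import json

# Existed_Raid entry as dumped by `bdev_raid_get_bdevs all` while only
# BaseBdev1 exists (values from the log above, trimmed).
info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the bash helper's comparisons: state/level/strip size must
    # match, and the discovered count must agree with how many base bdev
    # slots are actually configured.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    assert configured == info["num_base_bdevs_discovered"]

# Matches the `verify_raid_bdev_state Existed_Raid configuring raid1 0 3`
# invocation in the log.
verify_raid_bdev_state(info, "configuring", "raid1", 0, 3)
```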
00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.921 "name": "Existed_Raid", 00:10:05.921 "uuid": "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff", 00:10:05.921 "strip_size_kb": 0, 00:10:05.921 "state": "configuring", 00:10:05.921 "raid_level": "raid1", 00:10:05.921 "superblock": true, 00:10:05.921 "num_base_bdevs": 3, 00:10:05.921 "num_base_bdevs_discovered": 1, 00:10:05.921 "num_base_bdevs_operational": 3, 00:10:05.921 "base_bdevs_list": [ 00:10:05.921 { 00:10:05.921 "name": "BaseBdev1", 00:10:05.921 "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", 00:10:05.921 "is_configured": true, 00:10:05.921 "data_offset": 2048, 00:10:05.921 "data_size": 63488 00:10:05.921 }, 00:10:05.921 { 00:10:05.921 "name": "BaseBdev2", 00:10:05.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.921 "is_configured": false, 00:10:05.921 "data_offset": 0, 00:10:05.921 "data_size": 0 00:10:05.921 }, 00:10:05.921 { 00:10:05.921 "name": "BaseBdev3", 00:10:05.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.921 "is_configured": false, 00:10:05.921 "data_offset": 0, 00:10:05.921 "data_size": 0 00:10:05.921 } 00:10:05.921 ] 00:10:05.921 }' 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.921 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.180 [2024-11-18 03:59:02.801309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.180 BaseBdev2 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:06.180 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.440 [ 00:10:06.440 { 00:10:06.440 "name": "BaseBdev2", 00:10:06.440 "aliases": [ 00:10:06.440 "76b21cbb-7f08-402b-87ed-774804ab6700" 00:10:06.440 ], 00:10:06.440 "product_name": "Malloc disk", 00:10:06.440 "block_size": 512, 00:10:06.440 "num_blocks": 65536, 00:10:06.440 "uuid": "76b21cbb-7f08-402b-87ed-774804ab6700", 00:10:06.440 "assigned_rate_limits": { 00:10:06.440 "rw_ios_per_sec": 0, 00:10:06.440 "rw_mbytes_per_sec": 0, 00:10:06.440 "r_mbytes_per_sec": 0, 00:10:06.440 "w_mbytes_per_sec": 0 00:10:06.440 }, 00:10:06.440 "claimed": true, 00:10:06.440 "claim_type": "exclusive_write", 00:10:06.440 "zoned": false, 00:10:06.440 "supported_io_types": { 00:10:06.440 "read": true, 00:10:06.440 "write": true, 00:10:06.440 "unmap": true, 00:10:06.440 "flush": true, 00:10:06.440 "reset": true, 00:10:06.440 "nvme_admin": false, 00:10:06.440 "nvme_io": false, 00:10:06.440 "nvme_io_md": false, 00:10:06.440 "write_zeroes": true, 00:10:06.440 "zcopy": true, 00:10:06.440 "get_zone_info": false, 00:10:06.440 "zone_management": false, 00:10:06.440 "zone_append": false, 00:10:06.440 "compare": false, 00:10:06.440 "compare_and_write": false, 00:10:06.440 "abort": true, 00:10:06.440 "seek_hole": false, 00:10:06.440 "seek_data": false, 00:10:06.440 "copy": true, 00:10:06.440 "nvme_iov_md": false 00:10:06.440 }, 00:10:06.440 "memory_domains": [ 00:10:06.440 { 00:10:06.440 "dma_device_id": "system", 00:10:06.440 "dma_device_type": 1 00:10:06.440 }, 00:10:06.440 { 00:10:06.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.440 "dma_device_type": 2 00:10:06.440 } 00:10:06.440 ], 00:10:06.440 "driver_specific": {} 00:10:06.440 } 00:10:06.440 ] 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.440 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.440 
03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.440 "name": "Existed_Raid", 00:10:06.440 "uuid": "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff", 00:10:06.440 "strip_size_kb": 0, 00:10:06.440 "state": "configuring", 00:10:06.440 "raid_level": "raid1", 00:10:06.440 "superblock": true, 00:10:06.440 "num_base_bdevs": 3, 00:10:06.440 "num_base_bdevs_discovered": 2, 00:10:06.440 "num_base_bdevs_operational": 3, 00:10:06.440 "base_bdevs_list": [ 00:10:06.440 { 00:10:06.440 "name": "BaseBdev1", 00:10:06.440 "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", 00:10:06.440 "is_configured": true, 00:10:06.440 "data_offset": 2048, 00:10:06.440 "data_size": 63488 00:10:06.440 }, 00:10:06.440 { 00:10:06.440 "name": "BaseBdev2", 00:10:06.440 "uuid": "76b21cbb-7f08-402b-87ed-774804ab6700", 00:10:06.440 "is_configured": true, 00:10:06.440 "data_offset": 2048, 00:10:06.440 "data_size": 63488 00:10:06.440 }, 00:10:06.440 { 00:10:06.440 "name": "BaseBdev3", 00:10:06.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.440 "is_configured": false, 00:10:06.441 "data_offset": 0, 00:10:06.441 "data_size": 0 00:10:06.441 } 00:10:06.441 ] 00:10:06.441 }' 00:10:06.441 03:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.441 03:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.700 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.700 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.700 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.700 [2024-11-18 03:59:03.338013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.700 [2024-11-18 03:59:03.338409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:06.700 [2024-11-18 03:59:03.338471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:06.700 [2024-11-18 03:59:03.338800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:06.700 BaseBdev3 00:10:06.700 [2024-11-18 03:59:03.339018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.700 [2024-11-18 03:59:03.339030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:06.960 [2024-11-18 03:59:03.339191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.961 03:59:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.961 [ 00:10:06.961 { 00:10:06.961 "name": "BaseBdev3", 00:10:06.961 "aliases": [ 00:10:06.961 "22e50fe3-faab-4bc0-89a3-76b13aeace31" 00:10:06.961 ], 00:10:06.961 "product_name": "Malloc disk", 00:10:06.961 "block_size": 512, 00:10:06.961 "num_blocks": 65536, 00:10:06.961 "uuid": "22e50fe3-faab-4bc0-89a3-76b13aeace31", 00:10:06.961 "assigned_rate_limits": { 00:10:06.961 "rw_ios_per_sec": 0, 00:10:06.961 "rw_mbytes_per_sec": 0, 00:10:06.961 "r_mbytes_per_sec": 0, 00:10:06.961 "w_mbytes_per_sec": 0 00:10:06.961 }, 00:10:06.961 "claimed": true, 00:10:06.961 "claim_type": "exclusive_write", 00:10:06.961 "zoned": false, 00:10:06.961 "supported_io_types": { 00:10:06.961 "read": true, 00:10:06.961 "write": true, 00:10:06.961 "unmap": true, 00:10:06.961 "flush": true, 00:10:06.961 "reset": true, 00:10:06.961 "nvme_admin": false, 00:10:06.961 "nvme_io": false, 00:10:06.961 "nvme_io_md": false, 00:10:06.961 "write_zeroes": true, 00:10:06.961 "zcopy": true, 00:10:06.961 "get_zone_info": false, 00:10:06.961 "zone_management": false, 00:10:06.961 "zone_append": false, 00:10:06.961 "compare": false, 00:10:06.961 "compare_and_write": false, 00:10:06.961 "abort": true, 00:10:06.961 "seek_hole": false, 00:10:06.961 "seek_data": false, 00:10:06.961 "copy": true, 00:10:06.961 "nvme_iov_md": false 00:10:06.961 }, 00:10:06.961 "memory_domains": [ 00:10:06.961 { 00:10:06.961 "dma_device_id": "system", 00:10:06.961 "dma_device_type": 1 00:10:06.961 }, 00:10:06.961 { 00:10:06.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.961 "dma_device_type": 2 00:10:06.961 } 00:10:06.961 ], 00:10:06.961 "driver_specific": {} 00:10:06.961 } 00:10:06.961 ] 
00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.961 
03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.961 "name": "Existed_Raid", 00:10:06.961 "uuid": "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff", 00:10:06.961 "strip_size_kb": 0, 00:10:06.961 "state": "online", 00:10:06.961 "raid_level": "raid1", 00:10:06.961 "superblock": true, 00:10:06.961 "num_base_bdevs": 3, 00:10:06.961 "num_base_bdevs_discovered": 3, 00:10:06.961 "num_base_bdevs_operational": 3, 00:10:06.961 "base_bdevs_list": [ 00:10:06.961 { 00:10:06.961 "name": "BaseBdev1", 00:10:06.961 "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", 00:10:06.961 "is_configured": true, 00:10:06.961 "data_offset": 2048, 00:10:06.961 "data_size": 63488 00:10:06.961 }, 00:10:06.961 { 00:10:06.961 "name": "BaseBdev2", 00:10:06.961 "uuid": "76b21cbb-7f08-402b-87ed-774804ab6700", 00:10:06.961 "is_configured": true, 00:10:06.961 "data_offset": 2048, 00:10:06.961 "data_size": 63488 00:10:06.961 }, 00:10:06.961 { 00:10:06.961 "name": "BaseBdev3", 00:10:06.961 "uuid": "22e50fe3-faab-4bc0-89a3-76b13aeace31", 00:10:06.961 "is_configured": true, 00:10:06.961 "data_offset": 2048, 00:10:06.961 "data_size": 63488 00:10:06.961 } 00:10:06.961 ] 00:10:06.961 }' 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.961 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
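(Editor's note: with all three base bdevs claimed, the array transitions to "online". The script then extracts configured base bdev names with `jq -r '... | select(.is_configured == true).name'`; an equivalent filter in Python over the `base_bdevs_list` from the online-state dump above:)

```python
import json

# base_bdevs_list for Existed_Raid once all three base bdevs are claimed,
# copied from the online-state dump in this log.
base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", "is_configured": true},
  {"name": "BaseBdev2", "uuid": "76b21cbb-7f08-402b-87ed-774804ab6700", "is_configured": true},
  {"name": "BaseBdev3", "uuid": "22e50fe3-faab-4bc0-89a3-76b13aeace31", "is_configured": true}
]
""")

# Python equivalent of the script's jq filter
# '.[] | select(.is_configured == true).name'.
names = [b["name"] for b in base_bdevs_list if b["is_configured"]]
assert names == ["BaseBdev1", "BaseBdev2", "BaseBdev3"]
```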
00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.221 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 [2024-11-18 03:59:03.865556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.481 "name": "Existed_Raid", 00:10:07.481 "aliases": [ 00:10:07.481 "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff" 00:10:07.481 ], 00:10:07.481 "product_name": "Raid Volume", 00:10:07.481 "block_size": 512, 00:10:07.481 "num_blocks": 63488, 00:10:07.481 "uuid": "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff", 00:10:07.481 "assigned_rate_limits": { 00:10:07.481 "rw_ios_per_sec": 0, 00:10:07.481 "rw_mbytes_per_sec": 0, 00:10:07.481 "r_mbytes_per_sec": 0, 00:10:07.481 "w_mbytes_per_sec": 0 00:10:07.481 }, 00:10:07.481 "claimed": false, 00:10:07.481 "zoned": false, 00:10:07.481 "supported_io_types": { 00:10:07.481 "read": true, 00:10:07.481 "write": true, 00:10:07.481 "unmap": false, 00:10:07.481 "flush": false, 00:10:07.481 "reset": true, 00:10:07.481 "nvme_admin": false, 00:10:07.481 "nvme_io": false, 00:10:07.481 "nvme_io_md": false, 00:10:07.481 "write_zeroes": true, 
00:10:07.481 "zcopy": false, 00:10:07.481 "get_zone_info": false, 00:10:07.481 "zone_management": false, 00:10:07.481 "zone_append": false, 00:10:07.481 "compare": false, 00:10:07.481 "compare_and_write": false, 00:10:07.481 "abort": false, 00:10:07.481 "seek_hole": false, 00:10:07.481 "seek_data": false, 00:10:07.481 "copy": false, 00:10:07.481 "nvme_iov_md": false 00:10:07.481 }, 00:10:07.481 "memory_domains": [ 00:10:07.481 { 00:10:07.481 "dma_device_id": "system", 00:10:07.481 "dma_device_type": 1 00:10:07.481 }, 00:10:07.481 { 00:10:07.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.481 "dma_device_type": 2 00:10:07.481 }, 00:10:07.481 { 00:10:07.481 "dma_device_id": "system", 00:10:07.481 "dma_device_type": 1 00:10:07.481 }, 00:10:07.481 { 00:10:07.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.481 "dma_device_type": 2 00:10:07.481 }, 00:10:07.481 { 00:10:07.481 "dma_device_id": "system", 00:10:07.481 "dma_device_type": 1 00:10:07.481 }, 00:10:07.481 { 00:10:07.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.481 "dma_device_type": 2 00:10:07.481 } 00:10:07.481 ], 00:10:07.481 "driver_specific": { 00:10:07.481 "raid": { 00:10:07.481 "uuid": "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff", 00:10:07.481 "strip_size_kb": 0, 00:10:07.481 "state": "online", 00:10:07.481 "raid_level": "raid1", 00:10:07.481 "superblock": true, 00:10:07.481 "num_base_bdevs": 3, 00:10:07.481 "num_base_bdevs_discovered": 3, 00:10:07.481 "num_base_bdevs_operational": 3, 00:10:07.481 "base_bdevs_list": [ 00:10:07.481 { 00:10:07.481 "name": "BaseBdev1", 00:10:07.481 "uuid": "f2ee4e15-0aef-4a35-a232-df83a20be600", 00:10:07.481 "is_configured": true, 00:10:07.481 "data_offset": 2048, 00:10:07.481 "data_size": 63488 00:10:07.481 }, 00:10:07.481 { 00:10:07.481 "name": "BaseBdev2", 00:10:07.481 "uuid": "76b21cbb-7f08-402b-87ed-774804ab6700", 00:10:07.481 "is_configured": true, 00:10:07.481 "data_offset": 2048, 00:10:07.481 "data_size": 63488 00:10:07.481 }, 00:10:07.481 { 
00:10:07.481 "name": "BaseBdev3", 00:10:07.481 "uuid": "22e50fe3-faab-4bc0-89a3-76b13aeace31", 00:10:07.481 "is_configured": true, 00:10:07.481 "data_offset": 2048, 00:10:07.481 "data_size": 63488 00:10:07.481 } 00:10:07.481 ] 00:10:07.481 } 00:10:07.481 } 00:10:07.481 }' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.481 BaseBdev2 00:10:07.481 BaseBdev3' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.481 03:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.481 03:59:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.481 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.482 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.482 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.482 03:59:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.482 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.482 [2024-11-18 03:59:04.092924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.742 
03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.742 "name": "Existed_Raid", 00:10:07.742 "uuid": "34f22e60-7f93-4a61-a3b5-225f0ae7d8ff", 00:10:07.742 "strip_size_kb": 0, 00:10:07.742 "state": "online", 00:10:07.742 "raid_level": "raid1", 00:10:07.742 "superblock": true, 00:10:07.742 "num_base_bdevs": 3, 00:10:07.742 "num_base_bdevs_discovered": 2, 00:10:07.742 "num_base_bdevs_operational": 2, 00:10:07.742 "base_bdevs_list": [ 00:10:07.742 { 00:10:07.742 "name": null, 00:10:07.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.742 "is_configured": false, 00:10:07.742 "data_offset": 0, 00:10:07.742 "data_size": 63488 00:10:07.742 }, 00:10:07.742 { 00:10:07.742 "name": "BaseBdev2", 00:10:07.742 "uuid": "76b21cbb-7f08-402b-87ed-774804ab6700", 00:10:07.742 "is_configured": true, 00:10:07.742 "data_offset": 2048, 00:10:07.742 "data_size": 63488 00:10:07.742 }, 00:10:07.742 { 00:10:07.742 "name": "BaseBdev3", 00:10:07.742 "uuid": "22e50fe3-faab-4bc0-89a3-76b13aeace31", 00:10:07.742 "is_configured": true, 00:10:07.742 "data_offset": 2048, 00:10:07.742 "data_size": 63488 00:10:07.742 } 00:10:07.742 ] 00:10:07.742 }' 00:10:07.742 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.742 
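The trace above shows the harness pulling the `Existed_Raid` record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'`, and earlier extracting configured base bdev names with `select(.is_configured == true).name`. A minimal Python sketch of the same selection logic, using hypothetical sample data modeled on the JSON captured in this log (not SPDK code, just an illustration of the filters):

```python
import json

# Hypothetical sample modeled on the Existed_Raid JSON in the trace above.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "base_bdevs_list": [
      {"name": null, "is_configured": false},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
existed = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# Equivalent of: jq -r '.base_bdevs_list[] | select(.is_configured == true).name'
configured = [b["name"] for b in existed["base_bdevs_list"] if b["is_configured"]]
print(configured)  # ['BaseBdev2', 'BaseBdev3']
```

In the log, `num_base_bdevs_discovered` drops to 2 after `bdev_malloc_delete BaseBdev1`, which matches the one unconfigured slot in the sample above.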
03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.002 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.002 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.002 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.002 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.002 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.002 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.263 [2024-11-18 03:59:04.681752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.263 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.263 [2024-11-18 03:59:04.832660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.263 [2024-11-18 03:59:04.832891] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.523 [2024-11-18 03:59:04.935642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.523 [2024-11-18 03:59:04.935815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.523 [2024-11-18 03:59:04.935879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.523 03:59:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.523 BaseBdev2 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.523 03:59:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.523 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.523 [ 00:10:08.523 { 00:10:08.523 "name": "BaseBdev2", 00:10:08.523 "aliases": [ 00:10:08.523 "fbb56e62-4874-46c7-be8c-07fab6d49c30" 00:10:08.523 ], 00:10:08.523 "product_name": "Malloc disk", 00:10:08.523 "block_size": 512, 00:10:08.523 "num_blocks": 65536, 00:10:08.523 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:08.523 "assigned_rate_limits": { 00:10:08.523 "rw_ios_per_sec": 0, 00:10:08.523 "rw_mbytes_per_sec": 0, 00:10:08.523 "r_mbytes_per_sec": 0, 00:10:08.523 "w_mbytes_per_sec": 0 00:10:08.523 }, 00:10:08.523 "claimed": false, 00:10:08.523 "zoned": false, 00:10:08.523 "supported_io_types": { 00:10:08.523 "read": true, 00:10:08.523 "write": true, 00:10:08.523 "unmap": true, 00:10:08.523 "flush": true, 00:10:08.523 "reset": true, 00:10:08.523 "nvme_admin": false, 00:10:08.523 "nvme_io": false, 00:10:08.523 "nvme_io_md": false, 00:10:08.523 
"write_zeroes": true, 00:10:08.523 "zcopy": true, 00:10:08.523 "get_zone_info": false, 00:10:08.523 "zone_management": false, 00:10:08.523 "zone_append": false, 00:10:08.523 "compare": false, 00:10:08.523 "compare_and_write": false, 00:10:08.523 "abort": true, 00:10:08.523 "seek_hole": false, 00:10:08.523 "seek_data": false, 00:10:08.523 "copy": true, 00:10:08.523 "nvme_iov_md": false 00:10:08.523 }, 00:10:08.523 "memory_domains": [ 00:10:08.523 { 00:10:08.523 "dma_device_id": "system", 00:10:08.523 "dma_device_type": 1 00:10:08.523 }, 00:10:08.523 { 00:10:08.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.523 "dma_device_type": 2 00:10:08.523 } 00:10:08.523 ], 00:10:08.523 "driver_specific": {} 00:10:08.524 } 00:10:08.524 ] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.524 BaseBdev3 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.524 [ 00:10:08.524 { 00:10:08.524 "name": "BaseBdev3", 00:10:08.524 "aliases": [ 00:10:08.524 "1438aec7-f12d-4338-8ae8-3428281e7442" 00:10:08.524 ], 00:10:08.524 "product_name": "Malloc disk", 00:10:08.524 "block_size": 512, 00:10:08.524 "num_blocks": 65536, 00:10:08.524 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:08.524 "assigned_rate_limits": { 00:10:08.524 "rw_ios_per_sec": 0, 00:10:08.524 "rw_mbytes_per_sec": 0, 00:10:08.524 "r_mbytes_per_sec": 0, 00:10:08.524 "w_mbytes_per_sec": 0 00:10:08.524 }, 00:10:08.524 "claimed": false, 00:10:08.524 "zoned": false, 00:10:08.524 "supported_io_types": { 00:10:08.524 "read": true, 00:10:08.524 "write": true, 00:10:08.524 "unmap": true, 00:10:08.524 "flush": true, 00:10:08.524 "reset": true, 00:10:08.524 "nvme_admin": false, 00:10:08.524 "nvme_io": false, 
00:10:08.524 "nvme_io_md": false, 00:10:08.524 "write_zeroes": true, 00:10:08.524 "zcopy": true, 00:10:08.524 "get_zone_info": false, 00:10:08.524 "zone_management": false, 00:10:08.524 "zone_append": false, 00:10:08.524 "compare": false, 00:10:08.524 "compare_and_write": false, 00:10:08.524 "abort": true, 00:10:08.524 "seek_hole": false, 00:10:08.524 "seek_data": false, 00:10:08.524 "copy": true, 00:10:08.524 "nvme_iov_md": false 00:10:08.524 }, 00:10:08.524 "memory_domains": [ 00:10:08.524 { 00:10:08.524 "dma_device_id": "system", 00:10:08.524 "dma_device_type": 1 00:10:08.524 }, 00:10:08.524 { 00:10:08.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.524 "dma_device_type": 2 00:10:08.524 } 00:10:08.524 ], 00:10:08.524 "driver_specific": {} 00:10:08.524 } 00:10:08.524 ] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.524 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.524 [2024-11-18 03:59:05.160966] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.524 [2024-11-18 03:59:05.161092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.524 [2024-11-18 03:59:05.161131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:10:08.784 [2024-11-18 03:59:05.163220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.784 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.784 "name": "Existed_Raid", 00:10:08.784 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:08.784 "strip_size_kb": 0, 00:10:08.784 "state": "configuring", 00:10:08.784 "raid_level": "raid1", 00:10:08.784 "superblock": true, 00:10:08.784 "num_base_bdevs": 3, 00:10:08.784 "num_base_bdevs_discovered": 2, 00:10:08.784 "num_base_bdevs_operational": 3, 00:10:08.784 "base_bdevs_list": [ 00:10:08.784 { 00:10:08.784 "name": "BaseBdev1", 00:10:08.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.784 "is_configured": false, 00:10:08.784 "data_offset": 0, 00:10:08.784 "data_size": 0 00:10:08.784 }, 00:10:08.784 { 00:10:08.784 "name": "BaseBdev2", 00:10:08.784 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:08.784 "is_configured": true, 00:10:08.784 "data_offset": 2048, 00:10:08.784 "data_size": 63488 00:10:08.784 }, 00:10:08.784 { 00:10:08.784 "name": "BaseBdev3", 00:10:08.784 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:08.784 "is_configured": true, 00:10:08.784 "data_offset": 2048, 00:10:08.784 "data_size": 63488 00:10:08.784 } 00:10:08.784 ] 00:10:08.784 }' 00:10:08.785 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.785 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.044 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.044 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.045 [2024-11-18 03:59:05.616250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.045 "name": "Existed_Raid", 00:10:09.045 "uuid": 
"665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:09.045 "strip_size_kb": 0, 00:10:09.045 "state": "configuring", 00:10:09.045 "raid_level": "raid1", 00:10:09.045 "superblock": true, 00:10:09.045 "num_base_bdevs": 3, 00:10:09.045 "num_base_bdevs_discovered": 1, 00:10:09.045 "num_base_bdevs_operational": 3, 00:10:09.045 "base_bdevs_list": [ 00:10:09.045 { 00:10:09.045 "name": "BaseBdev1", 00:10:09.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.045 "is_configured": false, 00:10:09.045 "data_offset": 0, 00:10:09.045 "data_size": 0 00:10:09.045 }, 00:10:09.045 { 00:10:09.045 "name": null, 00:10:09.045 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:09.045 "is_configured": false, 00:10:09.045 "data_offset": 0, 00:10:09.045 "data_size": 63488 00:10:09.045 }, 00:10:09.045 { 00:10:09.045 "name": "BaseBdev3", 00:10:09.045 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:09.045 "is_configured": true, 00:10:09.045 "data_offset": 2048, 00:10:09.045 "data_size": 63488 00:10:09.045 } 00:10:09.045 ] 00:10:09.045 }' 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.045 03:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:09.615 03:59:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 [2024-11-18 03:59:06.122254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.615 BaseBdev1 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 [ 00:10:09.615 { 00:10:09.615 "name": "BaseBdev1", 00:10:09.615 "aliases": [ 00:10:09.615 "7c0d85b7-d79e-4337-8cde-19b44348346f" 00:10:09.615 ], 00:10:09.615 "product_name": "Malloc disk", 00:10:09.615 "block_size": 512, 00:10:09.615 "num_blocks": 65536, 00:10:09.615 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:09.615 "assigned_rate_limits": { 00:10:09.615 "rw_ios_per_sec": 0, 00:10:09.615 "rw_mbytes_per_sec": 0, 00:10:09.615 "r_mbytes_per_sec": 0, 00:10:09.615 "w_mbytes_per_sec": 0 00:10:09.615 }, 00:10:09.615 "claimed": true, 00:10:09.615 "claim_type": "exclusive_write", 00:10:09.615 "zoned": false, 00:10:09.615 "supported_io_types": { 00:10:09.615 "read": true, 00:10:09.615 "write": true, 00:10:09.615 "unmap": true, 00:10:09.615 "flush": true, 00:10:09.615 "reset": true, 00:10:09.615 "nvme_admin": false, 00:10:09.615 "nvme_io": false, 00:10:09.615 "nvme_io_md": false, 00:10:09.615 "write_zeroes": true, 00:10:09.615 "zcopy": true, 00:10:09.615 "get_zone_info": false, 00:10:09.615 "zone_management": false, 00:10:09.615 "zone_append": false, 00:10:09.615 "compare": false, 00:10:09.615 "compare_and_write": false, 00:10:09.615 "abort": true, 00:10:09.615 "seek_hole": false, 00:10:09.615 "seek_data": false, 00:10:09.615 "copy": true, 00:10:09.615 "nvme_iov_md": false 00:10:09.615 }, 00:10:09.615 "memory_domains": [ 00:10:09.615 { 00:10:09.615 "dma_device_id": "system", 00:10:09.615 "dma_device_type": 1 00:10:09.615 }, 00:10:09.615 { 00:10:09.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.615 "dma_device_type": 2 00:10:09.615 } 00:10:09.615 ], 00:10:09.615 "driver_specific": {} 00:10:09.615 } 00:10:09.615 ] 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.615 
03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.615 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.616 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.616 "name": "Existed_Raid", 00:10:09.616 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:09.616 "strip_size_kb": 0, 
00:10:09.616 "state": "configuring", 00:10:09.616 "raid_level": "raid1", 00:10:09.616 "superblock": true, 00:10:09.616 "num_base_bdevs": 3, 00:10:09.616 "num_base_bdevs_discovered": 2, 00:10:09.616 "num_base_bdevs_operational": 3, 00:10:09.616 "base_bdevs_list": [ 00:10:09.616 { 00:10:09.616 "name": "BaseBdev1", 00:10:09.616 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:09.616 "is_configured": true, 00:10:09.616 "data_offset": 2048, 00:10:09.616 "data_size": 63488 00:10:09.616 }, 00:10:09.616 { 00:10:09.616 "name": null, 00:10:09.616 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:09.616 "is_configured": false, 00:10:09.616 "data_offset": 0, 00:10:09.616 "data_size": 63488 00:10:09.616 }, 00:10:09.616 { 00:10:09.616 "name": "BaseBdev3", 00:10:09.616 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:09.616 "is_configured": true, 00:10:09.616 "data_offset": 2048, 00:10:09.616 "data_size": 63488 00:10:09.616 } 00:10:09.616 ] 00:10:09.616 }' 00:10:09.616 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.616 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.185 [2024-11-18 03:59:06.641386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.185 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.185 "name": "Existed_Raid", 00:10:10.185 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:10.185 "strip_size_kb": 0, 00:10:10.185 "state": "configuring", 00:10:10.185 "raid_level": "raid1", 00:10:10.185 "superblock": true, 00:10:10.185 "num_base_bdevs": 3, 00:10:10.185 "num_base_bdevs_discovered": 1, 00:10:10.185 "num_base_bdevs_operational": 3, 00:10:10.185 "base_bdevs_list": [ 00:10:10.185 { 00:10:10.185 "name": "BaseBdev1", 00:10:10.185 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:10.185 "is_configured": true, 00:10:10.185 "data_offset": 2048, 00:10:10.185 "data_size": 63488 00:10:10.185 }, 00:10:10.185 { 00:10:10.185 "name": null, 00:10:10.185 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:10.185 "is_configured": false, 00:10:10.185 "data_offset": 0, 00:10:10.185 "data_size": 63488 00:10:10.185 }, 00:10:10.185 { 00:10:10.185 "name": null, 00:10:10.185 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:10.185 "is_configured": false, 00:10:10.185 "data_offset": 0, 00:10:10.185 "data_size": 63488 00:10:10.185 } 00:10:10.186 ] 00:10:10.186 }' 00:10:10.186 03:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.186 03:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.445 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.445 [2024-11-18 03:59:07.084669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.705 "name": "Existed_Raid", 00:10:10.705 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:10.705 "strip_size_kb": 0, 00:10:10.705 "state": "configuring", 00:10:10.705 "raid_level": "raid1", 00:10:10.705 "superblock": true, 00:10:10.705 "num_base_bdevs": 3, 00:10:10.705 "num_base_bdevs_discovered": 2, 00:10:10.705 "num_base_bdevs_operational": 3, 00:10:10.705 "base_bdevs_list": [ 00:10:10.705 { 00:10:10.705 "name": "BaseBdev1", 00:10:10.705 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:10.705 "is_configured": true, 00:10:10.705 "data_offset": 2048, 00:10:10.705 "data_size": 63488 00:10:10.705 }, 00:10:10.705 { 00:10:10.705 "name": null, 00:10:10.705 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:10.705 "is_configured": false, 00:10:10.705 "data_offset": 0, 00:10:10.705 "data_size": 63488 00:10:10.705 }, 00:10:10.705 { 00:10:10.705 "name": "BaseBdev3", 00:10:10.705 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:10.705 "is_configured": true, 00:10:10.705 "data_offset": 2048, 00:10:10.705 "data_size": 63488 00:10:10.705 } 00:10:10.705 ] 00:10:10.705 }' 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.705 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.972 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.972 [2024-11-18 03:59:07.579903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.237 "name": "Existed_Raid", 00:10:11.237 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:11.237 "strip_size_kb": 0, 00:10:11.237 "state": "configuring", 00:10:11.237 "raid_level": "raid1", 00:10:11.237 "superblock": true, 00:10:11.237 "num_base_bdevs": 3, 00:10:11.237 "num_base_bdevs_discovered": 1, 00:10:11.237 "num_base_bdevs_operational": 3, 00:10:11.237 "base_bdevs_list": [ 00:10:11.237 { 00:10:11.237 "name": null, 00:10:11.237 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:11.237 "is_configured": false, 00:10:11.237 "data_offset": 0, 00:10:11.237 "data_size": 63488 00:10:11.237 }, 00:10:11.237 { 00:10:11.237 "name": null, 00:10:11.237 "uuid": 
"fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:11.237 "is_configured": false, 00:10:11.237 "data_offset": 0, 00:10:11.237 "data_size": 63488 00:10:11.237 }, 00:10:11.237 { 00:10:11.237 "name": "BaseBdev3", 00:10:11.237 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:11.237 "is_configured": true, 00:10:11.237 "data_offset": 2048, 00:10:11.237 "data_size": 63488 00:10:11.237 } 00:10:11.237 ] 00:10:11.237 }' 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.237 03:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.497 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.497 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.497 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.497 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.497 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.757 [2024-11-18 03:59:08.157953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.757 "name": "Existed_Raid", 00:10:11.757 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:11.757 "strip_size_kb": 0, 00:10:11.757 "state": "configuring", 00:10:11.757 
"raid_level": "raid1", 00:10:11.757 "superblock": true, 00:10:11.757 "num_base_bdevs": 3, 00:10:11.757 "num_base_bdevs_discovered": 2, 00:10:11.757 "num_base_bdevs_operational": 3, 00:10:11.757 "base_bdevs_list": [ 00:10:11.757 { 00:10:11.757 "name": null, 00:10:11.757 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:11.757 "is_configured": false, 00:10:11.757 "data_offset": 0, 00:10:11.757 "data_size": 63488 00:10:11.757 }, 00:10:11.757 { 00:10:11.757 "name": "BaseBdev2", 00:10:11.757 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:11.757 "is_configured": true, 00:10:11.757 "data_offset": 2048, 00:10:11.757 "data_size": 63488 00:10:11.757 }, 00:10:11.757 { 00:10:11.757 "name": "BaseBdev3", 00:10:11.757 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:11.757 "is_configured": true, 00:10:11.757 "data_offset": 2048, 00:10:11.757 "data_size": 63488 00:10:11.757 } 00:10:11.757 ] 00:10:11.757 }' 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.757 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.016 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.017 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.017 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.017 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.017 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.277 03:59:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7c0d85b7-d79e-4337-8cde-19b44348346f 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.277 [2024-11-18 03:59:08.760073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.277 [2024-11-18 03:59:08.760401] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.277 [2024-11-18 03:59:08.760449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.277 [2024-11-18 03:59:08.760752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:12.277 NewBaseBdev 00:10:12.277 [2024-11-18 03:59:08.760968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.277 [2024-11-18 03:59:08.760986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:12.277 [2024-11-18 03:59:08.761138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:12.277 
03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.277 [ 00:10:12.277 { 00:10:12.277 "name": "NewBaseBdev", 00:10:12.277 "aliases": [ 00:10:12.277 "7c0d85b7-d79e-4337-8cde-19b44348346f" 00:10:12.277 ], 00:10:12.277 "product_name": "Malloc disk", 00:10:12.277 "block_size": 512, 00:10:12.277 "num_blocks": 65536, 00:10:12.277 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:12.277 "assigned_rate_limits": { 00:10:12.277 "rw_ios_per_sec": 0, 00:10:12.277 "rw_mbytes_per_sec": 0, 00:10:12.277 "r_mbytes_per_sec": 0, 00:10:12.277 "w_mbytes_per_sec": 0 00:10:12.277 }, 00:10:12.277 "claimed": true, 00:10:12.277 "claim_type": "exclusive_write", 00:10:12.277 
"zoned": false, 00:10:12.277 "supported_io_types": { 00:10:12.277 "read": true, 00:10:12.277 "write": true, 00:10:12.277 "unmap": true, 00:10:12.277 "flush": true, 00:10:12.277 "reset": true, 00:10:12.277 "nvme_admin": false, 00:10:12.277 "nvme_io": false, 00:10:12.277 "nvme_io_md": false, 00:10:12.277 "write_zeroes": true, 00:10:12.277 "zcopy": true, 00:10:12.277 "get_zone_info": false, 00:10:12.277 "zone_management": false, 00:10:12.277 "zone_append": false, 00:10:12.277 "compare": false, 00:10:12.277 "compare_and_write": false, 00:10:12.277 "abort": true, 00:10:12.277 "seek_hole": false, 00:10:12.277 "seek_data": false, 00:10:12.277 "copy": true, 00:10:12.277 "nvme_iov_md": false 00:10:12.277 }, 00:10:12.277 "memory_domains": [ 00:10:12.277 { 00:10:12.277 "dma_device_id": "system", 00:10:12.277 "dma_device_type": 1 00:10:12.277 }, 00:10:12.277 { 00:10:12.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.277 "dma_device_type": 2 00:10:12.277 } 00:10:12.277 ], 00:10:12.277 "driver_specific": {} 00:10:12.277 } 00:10:12.277 ] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.277 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.277 "name": "Existed_Raid", 00:10:12.277 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:12.277 "strip_size_kb": 0, 00:10:12.277 "state": "online", 00:10:12.277 "raid_level": "raid1", 00:10:12.277 "superblock": true, 00:10:12.277 "num_base_bdevs": 3, 00:10:12.277 "num_base_bdevs_discovered": 3, 00:10:12.277 "num_base_bdevs_operational": 3, 00:10:12.277 "base_bdevs_list": [ 00:10:12.277 { 00:10:12.277 "name": "NewBaseBdev", 00:10:12.277 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:12.277 "is_configured": true, 00:10:12.277 "data_offset": 2048, 00:10:12.277 "data_size": 63488 00:10:12.277 }, 00:10:12.278 { 00:10:12.278 "name": "BaseBdev2", 00:10:12.278 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:12.278 "is_configured": true, 00:10:12.278 "data_offset": 2048, 00:10:12.278 "data_size": 63488 00:10:12.278 }, 00:10:12.278 
{ 00:10:12.278 "name": "BaseBdev3", 00:10:12.278 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:12.278 "is_configured": true, 00:10:12.278 "data_offset": 2048, 00:10:12.278 "data_size": 63488 00:10:12.278 } 00:10:12.278 ] 00:10:12.278 }' 00:10:12.278 03:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.278 03:59:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.852 [2024-11-18 03:59:09.227725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.852 "name": "Existed_Raid", 00:10:12.852 
"aliases": [ 00:10:12.852 "665057ec-2435-47c9-a0a0-847d256f3f40" 00:10:12.852 ], 00:10:12.852 "product_name": "Raid Volume", 00:10:12.852 "block_size": 512, 00:10:12.852 "num_blocks": 63488, 00:10:12.852 "uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:12.852 "assigned_rate_limits": { 00:10:12.852 "rw_ios_per_sec": 0, 00:10:12.852 "rw_mbytes_per_sec": 0, 00:10:12.852 "r_mbytes_per_sec": 0, 00:10:12.852 "w_mbytes_per_sec": 0 00:10:12.852 }, 00:10:12.852 "claimed": false, 00:10:12.852 "zoned": false, 00:10:12.852 "supported_io_types": { 00:10:12.852 "read": true, 00:10:12.852 "write": true, 00:10:12.852 "unmap": false, 00:10:12.852 "flush": false, 00:10:12.852 "reset": true, 00:10:12.852 "nvme_admin": false, 00:10:12.852 "nvme_io": false, 00:10:12.852 "nvme_io_md": false, 00:10:12.852 "write_zeroes": true, 00:10:12.852 "zcopy": false, 00:10:12.852 "get_zone_info": false, 00:10:12.852 "zone_management": false, 00:10:12.852 "zone_append": false, 00:10:12.852 "compare": false, 00:10:12.852 "compare_and_write": false, 00:10:12.852 "abort": false, 00:10:12.852 "seek_hole": false, 00:10:12.852 "seek_data": false, 00:10:12.852 "copy": false, 00:10:12.852 "nvme_iov_md": false 00:10:12.852 }, 00:10:12.852 "memory_domains": [ 00:10:12.852 { 00:10:12.852 "dma_device_id": "system", 00:10:12.852 "dma_device_type": 1 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.852 "dma_device_type": 2 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "dma_device_id": "system", 00:10:12.852 "dma_device_type": 1 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.852 "dma_device_type": 2 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "dma_device_id": "system", 00:10:12.852 "dma_device_type": 1 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.852 "dma_device_type": 2 00:10:12.852 } 00:10:12.852 ], 00:10:12.852 "driver_specific": { 00:10:12.852 "raid": { 00:10:12.852 
"uuid": "665057ec-2435-47c9-a0a0-847d256f3f40", 00:10:12.852 "strip_size_kb": 0, 00:10:12.852 "state": "online", 00:10:12.852 "raid_level": "raid1", 00:10:12.852 "superblock": true, 00:10:12.852 "num_base_bdevs": 3, 00:10:12.852 "num_base_bdevs_discovered": 3, 00:10:12.852 "num_base_bdevs_operational": 3, 00:10:12.852 "base_bdevs_list": [ 00:10:12.852 { 00:10:12.852 "name": "NewBaseBdev", 00:10:12.852 "uuid": "7c0d85b7-d79e-4337-8cde-19b44348346f", 00:10:12.852 "is_configured": true, 00:10:12.852 "data_offset": 2048, 00:10:12.852 "data_size": 63488 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "name": "BaseBdev2", 00:10:12.852 "uuid": "fbb56e62-4874-46c7-be8c-07fab6d49c30", 00:10:12.852 "is_configured": true, 00:10:12.852 "data_offset": 2048, 00:10:12.852 "data_size": 63488 00:10:12.852 }, 00:10:12.852 { 00:10:12.852 "name": "BaseBdev3", 00:10:12.852 "uuid": "1438aec7-f12d-4338-8ae8-3428281e7442", 00:10:12.852 "is_configured": true, 00:10:12.852 "data_offset": 2048, 00:10:12.852 "data_size": 63488 00:10:12.852 } 00:10:12.852 ] 00:10:12.852 } 00:10:12.852 } 00:10:12.852 }' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:12.852 BaseBdev2 00:10:12.852 BaseBdev3' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.852 
03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.852 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.114 [2024-11-18 03:59:09.514901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.114 [2024-11-18 03:59:09.515024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.114 [2024-11-18 03:59:09.515134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.114 [2024-11-18 03:59:09.515488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.114 [2024-11-18 03:59:09.515545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68006 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68006 ']' 00:10:13.114 03:59:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68006 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68006 00:10:13.114 killing process with pid 68006 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68006' 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68006 00:10:13.114 [2024-11-18 03:59:09.561887] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.114 03:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68006 00:10:13.374 [2024-11-18 03:59:09.893475] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.777 03:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:14.777 00:10:14.777 real 0m10.750s 00:10:14.777 user 0m16.786s 00:10:14.777 sys 0m2.012s 00:10:14.777 03:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.777 03:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.777 ************************************ 00:10:14.777 END TEST raid_state_function_test_sb 00:10:14.777 ************************************ 00:10:14.777 03:59:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:14.777 03:59:11 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:14.777 03:59:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.777 03:59:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.777 ************************************ 00:10:14.777 START TEST raid_superblock_test 00:10:14.777 ************************************ 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:14.777 03:59:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68625 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68625 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68625 ']' 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.777 03:59:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.777 [2024-11-18 03:59:11.278046] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:14.778 [2024-11-18 03:59:11.278341] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68625 ] 00:10:15.037 [2024-11-18 03:59:11.467199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.037 [2024-11-18 03:59:11.609297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.296 [2024-11-18 03:59:11.851593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.296 [2024-11-18 03:59:11.851646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.554 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.554 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.554 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:15.555 
03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.555 malloc1 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.555 [2024-11-18 03:59:12.142464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:15.555 [2024-11-18 03:59:12.142626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.555 [2024-11-18 03:59:12.142671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:15.555 [2024-11-18 03:59:12.142705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.555 [2024-11-18 03:59:12.145137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.555 [2024-11-18 03:59:12.145208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:15.555 pt1 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.555 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.814 malloc2 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.814 [2024-11-18 03:59:12.200803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.814 [2024-11-18 03:59:12.200876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.814 [2024-11-18 03:59:12.200902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:15.814 [2024-11-18 03:59:12.200911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.814 [2024-11-18 03:59:12.203261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.814 [2024-11-18 03:59:12.203357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.814 
pt2 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.814 malloc3 00:10:15.814 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.815 [2024-11-18 03:59:12.280820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.815 [2024-11-18 03:59:12.280955] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.815 [2024-11-18 03:59:12.281007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:15.815 [2024-11-18 03:59:12.281045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.815 [2024-11-18 03:59:12.283401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.815 [2024-11-18 03:59:12.283479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.815 pt3 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.815 [2024-11-18 03:59:12.292870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:15.815 [2024-11-18 03:59:12.295018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.815 [2024-11-18 03:59:12.295120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.815 [2024-11-18 03:59:12.295306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:15.815 [2024-11-18 03:59:12.295355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.815 [2024-11-18 03:59:12.295619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:15.815 
[2024-11-18 03:59:12.295861] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:15.815 [2024-11-18 03:59:12.295910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:15.815 [2024-11-18 03:59:12.296094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.815 "name": "raid_bdev1", 00:10:15.815 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:15.815 "strip_size_kb": 0, 00:10:15.815 "state": "online", 00:10:15.815 "raid_level": "raid1", 00:10:15.815 "superblock": true, 00:10:15.815 "num_base_bdevs": 3, 00:10:15.815 "num_base_bdevs_discovered": 3, 00:10:15.815 "num_base_bdevs_operational": 3, 00:10:15.815 "base_bdevs_list": [ 00:10:15.815 { 00:10:15.815 "name": "pt1", 00:10:15.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.815 "is_configured": true, 00:10:15.815 "data_offset": 2048, 00:10:15.815 "data_size": 63488 00:10:15.815 }, 00:10:15.815 { 00:10:15.815 "name": "pt2", 00:10:15.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.815 "is_configured": true, 00:10:15.815 "data_offset": 2048, 00:10:15.815 "data_size": 63488 00:10:15.815 }, 00:10:15.815 { 00:10:15.815 "name": "pt3", 00:10:15.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.815 "is_configured": true, 00:10:15.815 "data_offset": 2048, 00:10:15.815 "data_size": 63488 00:10:15.815 } 00:10:15.815 ] 00:10:15.815 }' 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.815 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.386 03:59:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.386 [2024-11-18 03:59:12.748407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.386 "name": "raid_bdev1", 00:10:16.386 "aliases": [ 00:10:16.386 "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88" 00:10:16.386 ], 00:10:16.386 "product_name": "Raid Volume", 00:10:16.386 "block_size": 512, 00:10:16.386 "num_blocks": 63488, 00:10:16.386 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:16.386 "assigned_rate_limits": { 00:10:16.386 "rw_ios_per_sec": 0, 00:10:16.386 "rw_mbytes_per_sec": 0, 00:10:16.386 "r_mbytes_per_sec": 0, 00:10:16.386 "w_mbytes_per_sec": 0 00:10:16.386 }, 00:10:16.386 "claimed": false, 00:10:16.386 "zoned": false, 00:10:16.386 "supported_io_types": { 00:10:16.386 "read": true, 00:10:16.386 "write": true, 00:10:16.386 "unmap": false, 00:10:16.386 "flush": false, 00:10:16.386 "reset": true, 00:10:16.386 "nvme_admin": false, 00:10:16.386 "nvme_io": false, 00:10:16.386 "nvme_io_md": false, 00:10:16.386 "write_zeroes": true, 00:10:16.386 "zcopy": false, 00:10:16.386 "get_zone_info": false, 00:10:16.386 "zone_management": false, 00:10:16.386 "zone_append": false, 00:10:16.386 "compare": false, 00:10:16.386 
"compare_and_write": false, 00:10:16.386 "abort": false, 00:10:16.386 "seek_hole": false, 00:10:16.386 "seek_data": false, 00:10:16.386 "copy": false, 00:10:16.386 "nvme_iov_md": false 00:10:16.386 }, 00:10:16.386 "memory_domains": [ 00:10:16.386 { 00:10:16.386 "dma_device_id": "system", 00:10:16.386 "dma_device_type": 1 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.386 "dma_device_type": 2 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "dma_device_id": "system", 00:10:16.386 "dma_device_type": 1 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.386 "dma_device_type": 2 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "dma_device_id": "system", 00:10:16.386 "dma_device_type": 1 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.386 "dma_device_type": 2 00:10:16.386 } 00:10:16.386 ], 00:10:16.386 "driver_specific": { 00:10:16.386 "raid": { 00:10:16.386 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:16.386 "strip_size_kb": 0, 00:10:16.386 "state": "online", 00:10:16.386 "raid_level": "raid1", 00:10:16.386 "superblock": true, 00:10:16.386 "num_base_bdevs": 3, 00:10:16.386 "num_base_bdevs_discovered": 3, 00:10:16.386 "num_base_bdevs_operational": 3, 00:10:16.386 "base_bdevs_list": [ 00:10:16.386 { 00:10:16.386 "name": "pt1", 00:10:16.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.386 "is_configured": true, 00:10:16.386 "data_offset": 2048, 00:10:16.386 "data_size": 63488 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "name": "pt2", 00:10:16.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.386 "is_configured": true, 00:10:16.386 "data_offset": 2048, 00:10:16.386 "data_size": 63488 00:10:16.386 }, 00:10:16.386 { 00:10:16.386 "name": "pt3", 00:10:16.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.386 "is_configured": true, 00:10:16.386 "data_offset": 2048, 00:10:16.386 "data_size": 63488 00:10:16.386 } 
00:10:16.386 ] 00:10:16.386 } 00:10:16.386 } 00:10:16.386 }' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:16.386 pt2 00:10:16.386 pt3' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.386 03:59:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.386 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.387 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:16.387 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.387 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.387 03:59:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.387 03:59:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:16.650 [2024-11-18 03:59:13.043950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a35a9ac2-ca93-4fac-a3d9-8a74497f1c88 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a35a9ac2-ca93-4fac-a3d9-8a74497f1c88 ']' 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.650 [2024-11-18 03:59:13.091609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.650 [2024-11-18 03:59:13.091648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.650 [2024-11-18 03:59:13.091744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.650 [2024-11-18 03:59:13.091829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.650 [2024-11-18 03:59:13.091938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.650 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:16.651 
03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 [2024-11-18 03:59:13.231400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:16.651 [2024-11-18 03:59:13.233619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:16.651 [2024-11-18 03:59:13.233673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:10:16.651 [2024-11-18 03:59:13.233724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:16.651 [2024-11-18 03:59:13.233778] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:16.651 [2024-11-18 03:59:13.233797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:16.651 [2024-11-18 03:59:13.233813] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.651 [2024-11-18 03:59:13.233832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:16.651 request: 00:10:16.651 { 00:10:16.651 "name": "raid_bdev1", 00:10:16.651 "raid_level": "raid1", 00:10:16.651 "base_bdevs": [ 00:10:16.651 "malloc1", 00:10:16.651 "malloc2", 00:10:16.651 "malloc3" 00:10:16.651 ], 00:10:16.651 "superblock": false, 00:10:16.651 "method": "bdev_raid_create", 00:10:16.651 "req_id": 1 00:10:16.651 } 00:10:16.651 Got JSON-RPC error response 00:10:16.651 response: 00:10:16.651 { 00:10:16.651 "code": -17, 00:10:16.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:16.651 } 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:16.651 03:59:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.651 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.651 [2024-11-18 03:59:13.287247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.651 [2024-11-18 03:59:13.287385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.651 [2024-11-18 03:59:13.287439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:16.651 [2024-11-18 03:59:13.287477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.911 [2024-11-18 03:59:13.290155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.911 [2024-11-18 03:59:13.290227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.911 [2024-11-18 03:59:13.290338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:16.911 [2024-11-18 03:59:13.290421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.911 pt1 00:10:16.911 03:59:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.911 "name": "raid_bdev1", 00:10:16.911 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:16.911 "strip_size_kb": 0, 00:10:16.911 "state": 
"configuring", 00:10:16.911 "raid_level": "raid1", 00:10:16.911 "superblock": true, 00:10:16.911 "num_base_bdevs": 3, 00:10:16.911 "num_base_bdevs_discovered": 1, 00:10:16.911 "num_base_bdevs_operational": 3, 00:10:16.911 "base_bdevs_list": [ 00:10:16.911 { 00:10:16.911 "name": "pt1", 00:10:16.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.911 "is_configured": true, 00:10:16.911 "data_offset": 2048, 00:10:16.911 "data_size": 63488 00:10:16.911 }, 00:10:16.911 { 00:10:16.911 "name": null, 00:10:16.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.911 "is_configured": false, 00:10:16.911 "data_offset": 2048, 00:10:16.911 "data_size": 63488 00:10:16.911 }, 00:10:16.911 { 00:10:16.911 "name": null, 00:10:16.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.911 "is_configured": false, 00:10:16.911 "data_offset": 2048, 00:10:16.911 "data_size": 63488 00:10:16.911 } 00:10:16.911 ] 00:10:16.911 }' 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.911 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.171 [2024-11-18 03:59:13.670827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.171 [2024-11-18 03:59:13.670932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.171 [2024-11-18 03:59:13.670957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:17.171 
[2024-11-18 03:59:13.670968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.171 [2024-11-18 03:59:13.671505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.171 [2024-11-18 03:59:13.671536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.171 [2024-11-18 03:59:13.671642] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.171 [2024-11-18 03:59:13.671670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.171 pt2 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.171 [2024-11-18 03:59:13.682775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.171 "name": "raid_bdev1", 00:10:17.171 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:17.171 "strip_size_kb": 0, 00:10:17.171 "state": "configuring", 00:10:17.171 "raid_level": "raid1", 00:10:17.171 "superblock": true, 00:10:17.171 "num_base_bdevs": 3, 00:10:17.171 "num_base_bdevs_discovered": 1, 00:10:17.171 "num_base_bdevs_operational": 3, 00:10:17.171 "base_bdevs_list": [ 00:10:17.171 { 00:10:17.171 "name": "pt1", 00:10:17.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.171 "is_configured": true, 00:10:17.171 "data_offset": 2048, 00:10:17.171 "data_size": 63488 00:10:17.171 }, 00:10:17.171 { 00:10:17.171 "name": null, 00:10:17.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.171 "is_configured": false, 00:10:17.171 "data_offset": 0, 00:10:17.171 "data_size": 63488 00:10:17.171 }, 00:10:17.171 { 00:10:17.171 "name": null, 00:10:17.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.171 "is_configured": false, 00:10:17.171 
"data_offset": 2048, 00:10:17.171 "data_size": 63488 00:10:17.171 } 00:10:17.171 ] 00:10:17.171 }' 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.171 03:59:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 [2024-11-18 03:59:14.094059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.741 [2024-11-18 03:59:14.094248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.741 [2024-11-18 03:59:14.094286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:17.741 [2024-11-18 03:59:14.094331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.741 [2024-11-18 03:59:14.094910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.741 [2024-11-18 03:59:14.094978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.741 [2024-11-18 03:59:14.095112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.741 [2024-11-18 03:59:14.095188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.741 pt2 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.741 03:59:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 [2024-11-18 03:59:14.105978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.741 [2024-11-18 03:59:14.106066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.741 [2024-11-18 03:59:14.106101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:17.741 [2024-11-18 03:59:14.106147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.741 [2024-11-18 03:59:14.106542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.741 [2024-11-18 03:59:14.106606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.741 [2024-11-18 03:59:14.106692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:17.741 [2024-11-18 03:59:14.106740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.741 [2024-11-18 03:59:14.106921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:17.741 [2024-11-18 03:59:14.106963] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.741 [2024-11-18 03:59:14.107239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:17.741 [2024-11-18 03:59:14.107444] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:17.741 [2024-11-18 03:59:14.107504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:17.741 [2024-11-18 03:59:14.107694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.741 pt3 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.741 03:59:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.741 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.741 "name": "raid_bdev1", 00:10:17.742 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:17.742 "strip_size_kb": 0, 00:10:17.742 "state": "online", 00:10:17.742 "raid_level": "raid1", 00:10:17.742 "superblock": true, 00:10:17.742 "num_base_bdevs": 3, 00:10:17.742 "num_base_bdevs_discovered": 3, 00:10:17.742 "num_base_bdevs_operational": 3, 00:10:17.742 "base_bdevs_list": [ 00:10:17.742 { 00:10:17.742 "name": "pt1", 00:10:17.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.742 "is_configured": true, 00:10:17.742 "data_offset": 2048, 00:10:17.742 "data_size": 63488 00:10:17.742 }, 00:10:17.742 { 00:10:17.742 "name": "pt2", 00:10:17.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.742 "is_configured": true, 00:10:17.742 "data_offset": 2048, 00:10:17.742 "data_size": 63488 00:10:17.742 }, 00:10:17.742 { 00:10:17.742 "name": "pt3", 00:10:17.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.742 "is_configured": true, 00:10:17.742 "data_offset": 2048, 00:10:17.742 "data_size": 63488 00:10:17.742 } 00:10:17.742 ] 00:10:17.742 }' 00:10:17.742 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.742 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:18.002 [2024-11-18 03:59:14.589599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:18.002 "name": "raid_bdev1", 00:10:18.002 "aliases": [ 00:10:18.002 "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88" 00:10:18.002 ], 00:10:18.002 "product_name": "Raid Volume", 00:10:18.002 "block_size": 512, 00:10:18.002 "num_blocks": 63488, 00:10:18.002 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:18.002 "assigned_rate_limits": { 00:10:18.002 "rw_ios_per_sec": 0, 00:10:18.002 "rw_mbytes_per_sec": 0, 00:10:18.002 "r_mbytes_per_sec": 0, 00:10:18.002 "w_mbytes_per_sec": 0 00:10:18.002 }, 00:10:18.002 "claimed": false, 00:10:18.002 "zoned": false, 00:10:18.002 "supported_io_types": { 00:10:18.002 "read": true, 00:10:18.002 "write": true, 00:10:18.002 "unmap": false, 00:10:18.002 "flush": false, 00:10:18.002 "reset": true, 00:10:18.002 "nvme_admin": false, 00:10:18.002 "nvme_io": false, 00:10:18.002 "nvme_io_md": false, 00:10:18.002 "write_zeroes": true, 00:10:18.002 "zcopy": false, 00:10:18.002 "get_zone_info": 
false, 00:10:18.002 "zone_management": false, 00:10:18.002 "zone_append": false, 00:10:18.002 "compare": false, 00:10:18.002 "compare_and_write": false, 00:10:18.002 "abort": false, 00:10:18.002 "seek_hole": false, 00:10:18.002 "seek_data": false, 00:10:18.002 "copy": false, 00:10:18.002 "nvme_iov_md": false 00:10:18.002 }, 00:10:18.002 "memory_domains": [ 00:10:18.002 { 00:10:18.002 "dma_device_id": "system", 00:10:18.002 "dma_device_type": 1 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.002 "dma_device_type": 2 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "dma_device_id": "system", 00:10:18.002 "dma_device_type": 1 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.002 "dma_device_type": 2 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "dma_device_id": "system", 00:10:18.002 "dma_device_type": 1 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.002 "dma_device_type": 2 00:10:18.002 } 00:10:18.002 ], 00:10:18.002 "driver_specific": { 00:10:18.002 "raid": { 00:10:18.002 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88", 00:10:18.002 "strip_size_kb": 0, 00:10:18.002 "state": "online", 00:10:18.002 "raid_level": "raid1", 00:10:18.002 "superblock": true, 00:10:18.002 "num_base_bdevs": 3, 00:10:18.002 "num_base_bdevs_discovered": 3, 00:10:18.002 "num_base_bdevs_operational": 3, 00:10:18.002 "base_bdevs_list": [ 00:10:18.002 { 00:10:18.002 "name": "pt1", 00:10:18.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.002 "is_configured": true, 00:10:18.002 "data_offset": 2048, 00:10:18.002 "data_size": 63488 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "name": "pt2", 00:10:18.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.002 "is_configured": true, 00:10:18.002 "data_offset": 2048, 00:10:18.002 "data_size": 63488 00:10:18.002 }, 00:10:18.002 { 00:10:18.002 "name": "pt3", 00:10:18.002 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:18.002 "is_configured": true, 00:10:18.002 "data_offset": 2048, 00:10:18.002 "data_size": 63488 00:10:18.002 } 00:10:18.002 ] 00:10:18.002 } 00:10:18.002 } 00:10:18.002 }' 00:10:18.002 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.262 pt2 00:10:18.262 pt3' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.262 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.263 [2024-11-18 03:59:14.841050] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a35a9ac2-ca93-4fac-a3d9-8a74497f1c88 '!=' a35a9ac2-ca93-4fac-a3d9-8a74497f1c88 ']'
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.263 [2024-11-18 03:59:14.884775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:18.263 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.522 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.522 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.522 "name": "raid_bdev1",
00:10:18.522 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88",
00:10:18.522 "strip_size_kb": 0,
00:10:18.522 "state": "online",
00:10:18.522 "raid_level": "raid1",
00:10:18.522 "superblock": true,
00:10:18.522 "num_base_bdevs": 3,
00:10:18.522 "num_base_bdevs_discovered": 2,
00:10:18.522 "num_base_bdevs_operational": 2,
00:10:18.522 "base_bdevs_list": [
00:10:18.522 {
00:10:18.522 "name": null,
00:10:18.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:18.522 "is_configured": false,
00:10:18.522 "data_offset": 0,
00:10:18.522 "data_size": 63488
00:10:18.522 },
00:10:18.522 {
00:10:18.522 "name": "pt2",
00:10:18.522 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:18.522 "is_configured": true,
00:10:18.522 "data_offset": 2048,
00:10:18.522 "data_size": 63488
00:10:18.522 },
00:10:18.522 {
00:10:18.522 "name": "pt3",
00:10:18.522 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:18.522 "is_configured": true,
00:10:18.522 "data_offset": 2048,
00:10:18.522 "data_size": 63488
00:10:18.522 }
00:10:18.522 ]
00:10:18.522 }'
00:10:18.522 03:59:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.522 03:59:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.782 [2024-11-18 03:59:15.348001] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:18.782 [2024-11-18 03:59:15.348136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:18.782 [2024-11-18 03:59:15.348257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:18.782 [2024-11-18 03:59:15.348363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:18.782 [2024-11-18 03:59:15.348413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.782 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.783 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.783 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:10:18.783 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:10:18.783 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:10:18.783 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.783 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.042 [2024-11-18 03:59:15.427858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:19.042 [2024-11-18 03:59:15.427927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.042 [2024-11-18 03:59:15.427944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:10:19.042 [2024-11-18 03:59:15.427956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.042 [2024-11-18 03:59:15.430549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.042 [2024-11-18 03:59:15.430588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:19.042 [2024-11-18 03:59:15.430671] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:19.042 [2024-11-18 03:59:15.430728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:19.042 pt2
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.042 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.043 "name": "raid_bdev1",
00:10:19.043 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88",
00:10:19.043 "strip_size_kb": 0,
00:10:19.043 "state": "configuring",
00:10:19.043 "raid_level": "raid1",
00:10:19.043 "superblock": true,
00:10:19.043 "num_base_bdevs": 3,
00:10:19.043 "num_base_bdevs_discovered": 1,
00:10:19.043 "num_base_bdevs_operational": 2,
00:10:19.043 "base_bdevs_list": [
00:10:19.043 {
00:10:19.043 "name": null,
00:10:19.043 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.043 "is_configured": false,
00:10:19.043 "data_offset": 2048,
00:10:19.043 "data_size": 63488
00:10:19.043 },
00:10:19.043 {
00:10:19.043 "name": "pt2",
00:10:19.043 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:19.043 "is_configured": true,
00:10:19.043 "data_offset": 2048,
00:10:19.043 "data_size": 63488
00:10:19.043 },
00:10:19.043 {
00:10:19.043 "name": null,
00:10:19.043 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:19.043 "is_configured": false,
00:10:19.043 "data_offset": 2048,
00:10:19.043 "data_size": 63488
00:10:19.043 }
00:10:19.043 ]
00:10:19.043 }'
00:10:19.043 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.043 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.303 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:10:19.303 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:10:19.303 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:10:19.303 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:19.303 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.303 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.303 [2024-11-18 03:59:15.879146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:19.303 [2024-11-18 03:59:15.879333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.303 [2024-11-18 03:59:15.879374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:10:19.303 [2024-11-18 03:59:15.879404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.303 [2024-11-18 03:59:15.880020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.303 [2024-11-18 03:59:15.880086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:19.303 [2024-11-18 03:59:15.880228] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:19.303 [2024-11-18 03:59:15.880292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:19.303 [2024-11-18 03:59:15.880455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:10:19.303 [2024-11-18 03:59:15.880495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:19.303 [2024-11-18 03:59:15.880805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:10:19.303 [2024-11-18 03:59:15.881044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:10:19.303 [2024-11-18 03:59:15.881083] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:10:19.304 [2024-11-18 03:59:15.881271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:19.304 pt3
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.304 "name": "raid_bdev1",
00:10:19.304 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88",
00:10:19.304 "strip_size_kb": 0,
00:10:19.304 "state": "online",
00:10:19.304 "raid_level": "raid1",
00:10:19.304 "superblock": true,
00:10:19.304 "num_base_bdevs": 3,
00:10:19.304 "num_base_bdevs_discovered": 2,
00:10:19.304 "num_base_bdevs_operational": 2,
00:10:19.304 "base_bdevs_list": [
00:10:19.304 {
00:10:19.304 "name": null,
00:10:19.304 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.304 "is_configured": false,
00:10:19.304 "data_offset": 2048,
00:10:19.304 "data_size": 63488
00:10:19.304 },
00:10:19.304 {
00:10:19.304 "name": "pt2",
00:10:19.304 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:19.304 "is_configured": true,
00:10:19.304 "data_offset": 2048,
00:10:19.304 "data_size": 63488
00:10:19.304 },
00:10:19.304 {
00:10:19.304 "name": "pt3",
00:10:19.304 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:19.304 "is_configured": true,
00:10:19.304 "data_offset": 2048,
00:10:19.304 "data_size": 63488
00:10:19.304 }
00:10:19.304 ]
00:10:19.304 }'
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.304 03:59:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.874 [2024-11-18 03:59:16.330368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:19.874 [2024-11-18 03:59:16.330421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:19.874 [2024-11-18 03:59:16.330522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:19.874 [2024-11-18 03:59:16.330592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:19.874 [2024-11-18 03:59:16.330602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']'
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.874 [2024-11-18 03:59:16.406216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:19.874 [2024-11-18 03:59:16.406364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:19.874 [2024-11-18 03:59:16.406406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:10:19.874 [2024-11-18 03:59:16.406417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:19.874 [2024-11-18 03:59:16.409121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:19.874 [2024-11-18 03:59:16.409195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:19.874 [2024-11-18 03:59:16.409295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:19.874 [2024-11-18 03:59:16.409345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:19.874 [2024-11-18 03:59:16.409481] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:10:19.874 [2024-11-18 03:59:16.409492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:19.874 [2024-11-18 03:59:16.409510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:10:19.874 [2024-11-18 03:59:16.409563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:19.874 pt1
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']'
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.874 "name": "raid_bdev1",
00:10:19.874 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88",
00:10:19.874 "strip_size_kb": 0,
00:10:19.874 "state": "configuring",
00:10:19.874 "raid_level": "raid1",
00:10:19.874 "superblock": true,
00:10:19.874 "num_base_bdevs": 3,
00:10:19.874 "num_base_bdevs_discovered": 1,
00:10:19.874 "num_base_bdevs_operational": 2,
00:10:19.874 "base_bdevs_list": [
00:10:19.874 {
00:10:19.874 "name": null,
00:10:19.874 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.874 "is_configured": false,
00:10:19.874 "data_offset": 2048,
00:10:19.874 "data_size": 63488
00:10:19.874 },
00:10:19.874 {
00:10:19.874 "name": "pt2",
00:10:19.874 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:19.874 "is_configured": true,
00:10:19.874 "data_offset": 2048,
00:10:19.874 "data_size": 63488
00:10:19.874 },
00:10:19.874 {
00:10:19.874 "name": null,
00:10:19.874 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:19.874 "is_configured": false,
00:10:19.874 "data_offset": 2048,
00:10:19.874 "data_size": 63488
00:10:19.874 }
00:10:19.874 ]
00:10:19.874 }'
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.874 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.444 [2024-11-18 03:59:16.917363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:20.444 [2024-11-18 03:59:16.917531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:20.444 [2024-11-18 03:59:16.917571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:10:20.444 [2024-11-18 03:59:16.917601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:20.444 [2024-11-18 03:59:16.918168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:20.444 [2024-11-18 03:59:16.918227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:20.444 [2024-11-18 03:59:16.918351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:20.444 [2024-11-18 03:59:16.918437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:20.444 [2024-11-18 03:59:16.918632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:10:20.444 [2024-11-18 03:59:16.918667] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:20.444 [2024-11-18 03:59:16.918980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:10:20.444 [2024-11-18 03:59:16.919186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:10:20.444 [2024-11-18 03:59:16.919230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:10:20.444 [2024-11-18 03:59:16.919418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:20.444 pt3
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:20.444 "name": "raid_bdev1",
00:10:20.444 "uuid": "a35a9ac2-ca93-4fac-a3d9-8a74497f1c88",
00:10:20.444 "strip_size_kb": 0,
00:10:20.444 "state": "online",
00:10:20.444 "raid_level": "raid1",
00:10:20.444 "superblock": true,
00:10:20.444 "num_base_bdevs": 3,
00:10:20.444 "num_base_bdevs_discovered": 2,
00:10:20.444 "num_base_bdevs_operational": 2,
00:10:20.444 "base_bdevs_list": [
00:10:20.444 {
00:10:20.444 "name": null,
00:10:20.444 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:20.444 "is_configured": false,
00:10:20.444 "data_offset": 2048,
00:10:20.444 "data_size": 63488
00:10:20.444 },
00:10:20.444 {
00:10:20.444 "name": "pt2",
00:10:20.444 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:20.444 "is_configured": true,
00:10:20.444 "data_offset": 2048,
00:10:20.444 "data_size": 63488
00:10:20.444 },
00:10:20.444 {
00:10:20.444 "name": "pt3",
00:10:20.444 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:20.444 "is_configured": true,
00:10:20.444 "data_offset": 2048,
00:10:20.444 "data_size": 63488
00:10:20.444 }
00:10:20.444 ]
00:10:20.444 }'
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:20.444 03:59:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.704 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:10:20.704 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:10:20.704 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.704 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:20.964 [2024-11-18 03:59:17.364905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a35a9ac2-ca93-4fac-a3d9-8a74497f1c88 '!=' a35a9ac2-ca93-4fac-a3d9-8a74497f1c88 ']'
00:10:20.964 03:59:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68625
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68625 ']'
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68625
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68625
killing process with pid 68625
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68625'
00:10:20.965 03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68625
00:10:20.965 [2024-11-18 03:59:17.434452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:20.965 [2024-11-18 03:59:17.434586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
03:59:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68625
00:10:20.965 [2024-11-18 03:59:17.434651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:20.965 [2024-11-18 03:59:17.434665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline
00:10:21.225 [2024-11-18 03:59:17.757160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:22.606 ************************************
00:10:22.606 END TEST raid_superblock_test
00:10:22.606 ************************************
00:10:22.606 03:59:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:10:22.606
00:10:22.606 real 0m7.754s
00:10:22.606 user 0m11.946s
00:10:22.606 sys 0m1.428s
00:10:22.606 03:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:22.606 03:59:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.606 03:59:18 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read
00:10:22.606 03:59:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:22.606 03:59:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:22.606 03:59:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:22.606 ************************************
00:10:22.606 START TEST raid_read_error_test
00:10:22.606 ************************************
00:10:22.606 03:59:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:10:22.606 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KDcgRDOWDd
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69072
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69072
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69072 ']'
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:22.607 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:22.607 [2024-11-18 03:59:19.106789] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:10:22.607 [2024-11-18 03:59:19.106937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69072 ]
00:10:22.866 [2024-11-18 03:59:19.282902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.866 [2024-11-18 03:59:19.424897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:23.125 [2024-11-18 03:59:19.665696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:23.125 [2024-11-18 03:59:19.665746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.385 BaseBdev1_malloc
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:23.385 true
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.385 03:59:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.385 [2024-11-18 03:59:19.998241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.385 [2024-11-18 03:59:19.998317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.385 [2024-11-18 03:59:19.998342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:23.385 [2024-11-18 03:59:19.998354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.385 [2024-11-18 03:59:20.000850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.385 [2024-11-18 03:59:20.000888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.385 BaseBdev1 00:10:23.385 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.385 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.385 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.385 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.385 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 BaseBdev2_malloc 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 true 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 [2024-11-18 03:59:20.071465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.652 [2024-11-18 03:59:20.071646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.652 [2024-11-18 03:59:20.071673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:23.652 [2024-11-18 03:59:20.071685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.652 [2024-11-18 03:59:20.074175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.652 [2024-11-18 03:59:20.074217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.652 BaseBdev2 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 BaseBdev3_malloc 00:10:23.652 03:59:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 true 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.652 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.652 [2024-11-18 03:59:20.163921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:23.652 [2024-11-18 03:59:20.163986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.652 [2024-11-18 03:59:20.164005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:23.652 [2024-11-18 03:59:20.164016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.653 [2024-11-18 03:59:20.166391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.653 [2024-11-18 03:59:20.166523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:23.653 BaseBdev3 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.653 [2024-11-18 03:59:20.175974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.653 [2024-11-18 03:59:20.178043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.653 [2024-11-18 03:59:20.178116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.653 [2024-11-18 03:59:20.178316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:23.653 [2024-11-18 03:59:20.178329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.653 [2024-11-18 03:59:20.178567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:23.653 [2024-11-18 03:59:20.178741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:23.653 [2024-11-18 03:59:20.178754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:23.653 [2024-11-18 03:59:20.178917] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.653 03:59:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.653 "name": "raid_bdev1", 00:10:23.653 "uuid": "bcb1b782-1381-4690-b603-2840dcd4a404", 00:10:23.653 "strip_size_kb": 0, 00:10:23.653 "state": "online", 00:10:23.653 "raid_level": "raid1", 00:10:23.653 "superblock": true, 00:10:23.653 "num_base_bdevs": 3, 00:10:23.653 "num_base_bdevs_discovered": 3, 00:10:23.653 "num_base_bdevs_operational": 3, 00:10:23.653 "base_bdevs_list": [ 00:10:23.653 { 00:10:23.653 "name": "BaseBdev1", 00:10:23.653 "uuid": "6e584222-fc5c-5932-8a91-76f93db81ca3", 00:10:23.653 "is_configured": true, 00:10:23.653 "data_offset": 2048, 00:10:23.653 "data_size": 63488 00:10:23.653 }, 00:10:23.653 { 00:10:23.653 "name": "BaseBdev2", 00:10:23.653 "uuid": "2841c8ca-6b88-5366-ac6a-01d1f59d09f9", 00:10:23.653 "is_configured": true, 00:10:23.653 "data_offset": 2048, 00:10:23.653 "data_size": 63488 
00:10:23.653 }, 00:10:23.653 { 00:10:23.653 "name": "BaseBdev3", 00:10:23.653 "uuid": "95fd5e8d-7c90-53d1-baab-2caee5c1ecc6", 00:10:23.653 "is_configured": true, 00:10:23.653 "data_offset": 2048, 00:10:23.653 "data_size": 63488 00:10:23.653 } 00:10:23.653 ] 00:10:23.653 }' 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.653 03:59:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.238 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.238 03:59:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.238 [2024-11-18 03:59:20.748660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:25.177 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.178 
03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.178 "name": "raid_bdev1", 00:10:25.178 "uuid": "bcb1b782-1381-4690-b603-2840dcd4a404", 00:10:25.178 "strip_size_kb": 0, 00:10:25.178 "state": "online", 00:10:25.178 "raid_level": "raid1", 00:10:25.178 "superblock": true, 00:10:25.178 "num_base_bdevs": 3, 00:10:25.178 "num_base_bdevs_discovered": 3, 00:10:25.178 "num_base_bdevs_operational": 3, 00:10:25.178 "base_bdevs_list": [ 00:10:25.178 { 00:10:25.178 "name": "BaseBdev1", 00:10:25.178 "uuid": "6e584222-fc5c-5932-8a91-76f93db81ca3", 
00:10:25.178 "is_configured": true, 00:10:25.178 "data_offset": 2048, 00:10:25.178 "data_size": 63488 00:10:25.178 }, 00:10:25.178 { 00:10:25.178 "name": "BaseBdev2", 00:10:25.178 "uuid": "2841c8ca-6b88-5366-ac6a-01d1f59d09f9", 00:10:25.178 "is_configured": true, 00:10:25.178 "data_offset": 2048, 00:10:25.178 "data_size": 63488 00:10:25.178 }, 00:10:25.178 { 00:10:25.178 "name": "BaseBdev3", 00:10:25.178 "uuid": "95fd5e8d-7c90-53d1-baab-2caee5c1ecc6", 00:10:25.178 "is_configured": true, 00:10:25.178 "data_offset": 2048, 00:10:25.178 "data_size": 63488 00:10:25.178 } 00:10:25.178 ] 00:10:25.178 }' 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.178 03:59:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.748 [2024-11-18 03:59:22.095975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.748 [2024-11-18 03:59:22.096103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.748 [2024-11-18 03:59:22.098763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.748 [2024-11-18 03:59:22.098871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.748 [2024-11-18 03:59:22.099004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.748 [2024-11-18 03:59:22.099050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:25.748 { 00:10:25.748 "results": [ 00:10:25.748 { 00:10:25.748 "job": "raid_bdev1", 
00:10:25.748 "core_mask": "0x1", 00:10:25.748 "workload": "randrw", 00:10:25.748 "percentage": 50, 00:10:25.748 "status": "finished", 00:10:25.748 "queue_depth": 1, 00:10:25.748 "io_size": 131072, 00:10:25.748 "runtime": 1.34785, 00:10:25.748 "iops": 10394.331713469599, 00:10:25.748 "mibps": 1299.2914641836999, 00:10:25.748 "io_failed": 0, 00:10:25.748 "io_timeout": 0, 00:10:25.748 "avg_latency_us": 93.6084077187536, 00:10:25.748 "min_latency_us": 23.14061135371179, 00:10:25.748 "max_latency_us": 1445.2262008733624 00:10:25.748 } 00:10:25.748 ], 00:10:25.748 "core_count": 1 00:10:25.748 } 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69072 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69072 ']' 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69072 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69072 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.748 killing process with pid 69072 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69072' 00:10:25.748 03:59:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69072 00:10:25.748 [2024-11-18 03:59:22.141295] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.748 03:59:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69072 00:10:26.008 [2024-11-18 03:59:22.393340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KDcgRDOWDd 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:27.388 00:10:27.388 real 0m4.678s 00:10:27.388 user 0m5.433s 00:10:27.388 sys 0m0.645s 00:10:27.388 03:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.389 03:59:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.389 ************************************ 00:10:27.389 END TEST raid_read_error_test 00:10:27.389 ************************************ 00:10:27.389 03:59:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:27.389 03:59:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:27.389 03:59:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.389 03:59:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.389 ************************************ 00:10:27.389 START TEST raid_write_error_test 00:10:27.389 ************************************ 00:10:27.389 03:59:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.48c2tZOvn7 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69217 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69217 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69217 ']' 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.389 03:59:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.389 [2024-11-18 03:59:23.856208] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:27.389 [2024-11-18 03:59:23.856372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69217 ] 00:10:27.648 [2024-11-18 03:59:24.034852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.648 [2024-11-18 03:59:24.170148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.908 [2024-11-18 03:59:24.409170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.908 [2024-11-18 03:59:24.409234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.167 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.168 BaseBdev1_malloc 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.168 true 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.168 [2024-11-18 03:59:24.760262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.168 [2024-11-18 03:59:24.760328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.168 [2024-11-18 03:59:24.760350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.168 [2024-11-18 03:59:24.760362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.168 [2024-11-18 03:59:24.762727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.168 [2024-11-18 03:59:24.762764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.168 BaseBdev1 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.168 03:59:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.428 BaseBdev2_malloc 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 true 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 [2024-11-18 03:59:24.835983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:28.428 [2024-11-18 03:59:24.836049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.428 [2024-11-18 03:59:24.836067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:28.428 [2024-11-18 03:59:24.836080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.428 [2024-11-18 03:59:24.838490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.428 [2024-11-18 03:59:24.838527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:28.428 BaseBdev2 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.428 03:59:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 BaseBdev3_malloc 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 true 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 [2024-11-18 03:59:24.921473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:28.428 [2024-11-18 03:59:24.921534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.428 [2024-11-18 03:59:24.921553] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:28.428 [2024-11-18 03:59:24.921565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.428 [2024-11-18 03:59:24.924132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.428 [2024-11-18 03:59:24.924176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:28.428 BaseBdev3 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 [2024-11-18 03:59:24.933530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.428 [2024-11-18 03:59:24.935636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.428 [2024-11-18 03:59:24.935714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.428 [2024-11-18 03:59:24.935932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.428 [2024-11-18 03:59:24.935951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.428 [2024-11-18 03:59:24.936208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:28.428 [2024-11-18 03:59:24.936385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.428 [2024-11-18 03:59:24.936403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:28.428 [2024-11-18 03:59:24.936561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.428 "name": "raid_bdev1", 00:10:28.428 "uuid": "0fa456ca-514d-4703-856f-13ed9a65f8a2", 00:10:28.428 "strip_size_kb": 0, 00:10:28.428 "state": "online", 00:10:28.428 "raid_level": "raid1", 00:10:28.428 "superblock": true, 00:10:28.428 "num_base_bdevs": 3, 00:10:28.428 "num_base_bdevs_discovered": 3, 00:10:28.428 "num_base_bdevs_operational": 3, 00:10:28.428 "base_bdevs_list": [ 00:10:28.428 { 00:10:28.428 "name": "BaseBdev1", 00:10:28.428 
"uuid": "09a4de0a-d106-5d0f-916a-174e111579cb", 00:10:28.428 "is_configured": true, 00:10:28.428 "data_offset": 2048, 00:10:28.428 "data_size": 63488 00:10:28.428 }, 00:10:28.428 { 00:10:28.428 "name": "BaseBdev2", 00:10:28.428 "uuid": "b5c05d09-fbab-5895-bdd4-8dae6ca7b3d0", 00:10:28.428 "is_configured": true, 00:10:28.428 "data_offset": 2048, 00:10:28.428 "data_size": 63488 00:10:28.428 }, 00:10:28.428 { 00:10:28.428 "name": "BaseBdev3", 00:10:28.428 "uuid": "35c84fdb-d681-5a7d-a310-1d4b6f673b71", 00:10:28.428 "is_configured": true, 00:10:28.428 "data_offset": 2048, 00:10:28.428 "data_size": 63488 00:10:28.428 } 00:10:28.428 ] 00:10:28.428 }' 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.428 03:59:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.998 03:59:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:28.998 03:59:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.998 [2024-11-18 03:59:25.486096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:29.938 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.939 [2024-11-18 03:59:26.405115] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:29.939 [2024-11-18 03:59:26.405182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.939 [2024-11-18 03:59:26.405417] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.939 "name": "raid_bdev1", 00:10:29.939 "uuid": "0fa456ca-514d-4703-856f-13ed9a65f8a2", 00:10:29.939 "strip_size_kb": 0, 00:10:29.939 "state": "online", 00:10:29.939 "raid_level": "raid1", 00:10:29.939 "superblock": true, 00:10:29.939 "num_base_bdevs": 3, 00:10:29.939 "num_base_bdevs_discovered": 2, 00:10:29.939 "num_base_bdevs_operational": 2, 00:10:29.939 "base_bdevs_list": [ 00:10:29.939 { 00:10:29.939 "name": null, 00:10:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.939 "is_configured": false, 00:10:29.939 "data_offset": 0, 00:10:29.939 "data_size": 63488 00:10:29.939 }, 00:10:29.939 { 00:10:29.939 "name": "BaseBdev2", 00:10:29.939 "uuid": "b5c05d09-fbab-5895-bdd4-8dae6ca7b3d0", 00:10:29.939 "is_configured": true, 00:10:29.939 "data_offset": 2048, 00:10:29.939 "data_size": 63488 00:10:29.939 }, 00:10:29.939 { 00:10:29.939 "name": "BaseBdev3", 00:10:29.939 "uuid": "35c84fdb-d681-5a7d-a310-1d4b6f673b71", 00:10:29.939 "is_configured": true, 00:10:29.939 "data_offset": 2048, 00:10:29.939 "data_size": 63488 00:10:29.939 } 00:10:29.939 ] 00:10:29.939 }' 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.939 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.529 [2024-11-18 03:59:26.876605] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.529 [2024-11-18 03:59:26.876657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.529 [2024-11-18 03:59:26.879446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.529 [2024-11-18 03:59:26.879527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.529 [2024-11-18 03:59:26.879641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.529 [2024-11-18 03:59:26.879661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:30.529 { 00:10:30.529 "results": [ 00:10:30.529 { 00:10:30.529 "job": "raid_bdev1", 00:10:30.529 "core_mask": "0x1", 00:10:30.529 "workload": "randrw", 00:10:30.529 "percentage": 50, 00:10:30.529 "status": "finished", 00:10:30.529 "queue_depth": 1, 00:10:30.529 "io_size": 131072, 00:10:30.529 "runtime": 1.391141, 00:10:30.529 "iops": 11613.488496133748, 00:10:30.529 "mibps": 1451.6860620167186, 00:10:30.529 "io_failed": 0, 00:10:30.529 "io_timeout": 0, 00:10:30.529 "avg_latency_us": 83.53000515714146, 00:10:30.529 "min_latency_us": 23.02882096069869, 00:10:30.529 "max_latency_us": 1337.907423580786 00:10:30.529 } 00:10:30.529 ], 00:10:30.529 "core_count": 1 00:10:30.529 } 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69217 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69217 ']' 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69217 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:30.529 03:59:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69217 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.529 killing process with pid 69217 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69217' 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69217 00:10:30.529 [2024-11-18 03:59:26.925392] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.529 03:59:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69217 00:10:30.789 [2024-11-18 03:59:27.177753] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.48c2tZOvn7 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:32.171 00:10:32.171 real 0m4.701s 00:10:32.171 user 0m5.498s 00:10:32.171 sys 0m0.645s 00:10:32.171 03:59:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.171 03:59:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.171 ************************************ 00:10:32.171 END TEST raid_write_error_test 00:10:32.171 ************************************ 00:10:32.171 03:59:28 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:32.171 03:59:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:32.171 03:59:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:32.171 03:59:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.171 03:59:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.171 03:59:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.171 ************************************ 00:10:32.171 START TEST raid_state_function_test 00:10:32.171 ************************************ 00:10:32.171 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:32.171 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:32.172 
03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:32.172 Process raid pid: 69356 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69356 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69356' 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69356 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69356 ']' 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.172 03:59:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.172 [2024-11-18 03:59:28.611572] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:32.172 [2024-11-18 03:59:28.611756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.172 [2024-11-18 03:59:28.787615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.432 [2024-11-18 03:59:28.926702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.689 [2024-11-18 03:59:29.165356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.689 [2024-11-18 03:59:29.165537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.948 [2024-11-18 03:59:29.462794] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.948 [2024-11-18 03:59:29.462960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.948 [2024-11-18 03:59:29.462993] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.948 [2024-11-18 03:59:29.463018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.948 [2024-11-18 03:59:29.463037] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:32.948 [2024-11-18 03:59:29.463058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.948 [2024-11-18 03:59:29.463076] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:32.948 [2024-11-18 03:59:29.463097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.948 "name": "Existed_Raid", 00:10:32.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.948 "strip_size_kb": 64, 00:10:32.948 "state": "configuring", 00:10:32.948 "raid_level": "raid0", 00:10:32.948 "superblock": false, 00:10:32.948 "num_base_bdevs": 4, 00:10:32.948 "num_base_bdevs_discovered": 0, 00:10:32.948 "num_base_bdevs_operational": 4, 00:10:32.948 "base_bdevs_list": [ 00:10:32.948 { 00:10:32.948 "name": "BaseBdev1", 00:10:32.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.948 "is_configured": false, 00:10:32.948 "data_offset": 0, 00:10:32.948 "data_size": 0 00:10:32.948 }, 00:10:32.948 { 00:10:32.948 "name": "BaseBdev2", 00:10:32.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.948 "is_configured": false, 00:10:32.948 "data_offset": 0, 00:10:32.948 "data_size": 0 00:10:32.948 }, 00:10:32.948 { 00:10:32.948 "name": "BaseBdev3", 00:10:32.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.948 "is_configured": false, 00:10:32.948 "data_offset": 0, 00:10:32.948 "data_size": 0 00:10:32.948 }, 00:10:32.948 { 00:10:32.948 "name": "BaseBdev4", 00:10:32.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.948 "is_configured": false, 00:10:32.948 "data_offset": 0, 00:10:32.948 "data_size": 0 00:10:32.948 } 00:10:32.948 ] 00:10:32.948 }' 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.948 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 [2024-11-18 03:59:29.929934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.517 [2024-11-18 03:59:29.930067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 [2024-11-18 03:59:29.941864] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.517 [2024-11-18 03:59:29.941950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.517 [2024-11-18 03:59:29.941980] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.517 [2024-11-18 03:59:29.942003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.517 [2024-11-18 03:59:29.942020] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.517 [2024-11-18 03:59:29.942041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.517 [2024-11-18 03:59:29.942058] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.517 [2024-11-18 03:59:29.942078] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 [2024-11-18 03:59:29.993611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.517 BaseBdev1 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.517 03:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.517 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:33.517 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.517 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.517 [ 00:10:33.517 { 00:10:33.517 "name": "BaseBdev1", 00:10:33.517 "aliases": [ 00:10:33.517 "12f9c50f-1db8-47f1-a07e-abe3db614053" 00:10:33.517 ], 00:10:33.517 "product_name": "Malloc disk", 00:10:33.517 "block_size": 512, 00:10:33.517 "num_blocks": 65536, 00:10:33.517 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:33.517 "assigned_rate_limits": { 00:10:33.517 "rw_ios_per_sec": 0, 00:10:33.517 "rw_mbytes_per_sec": 0, 00:10:33.517 "r_mbytes_per_sec": 0, 00:10:33.517 "w_mbytes_per_sec": 0 00:10:33.517 }, 00:10:33.517 "claimed": true, 00:10:33.517 "claim_type": "exclusive_write", 00:10:33.517 "zoned": false, 00:10:33.517 "supported_io_types": { 00:10:33.517 "read": true, 00:10:33.517 "write": true, 00:10:33.517 "unmap": true, 00:10:33.517 "flush": true, 00:10:33.517 "reset": true, 00:10:33.517 "nvme_admin": false, 00:10:33.517 "nvme_io": false, 00:10:33.517 "nvme_io_md": false, 00:10:33.517 "write_zeroes": true, 00:10:33.517 "zcopy": true, 00:10:33.517 "get_zone_info": false, 00:10:33.517 "zone_management": false, 00:10:33.517 "zone_append": false, 00:10:33.517 "compare": false, 00:10:33.517 "compare_and_write": false, 00:10:33.517 "abort": true, 00:10:33.517 "seek_hole": false, 00:10:33.517 "seek_data": false, 00:10:33.517 "copy": true, 00:10:33.517 "nvme_iov_md": false 00:10:33.517 }, 00:10:33.517 "memory_domains": [ 00:10:33.517 { 00:10:33.518 "dma_device_id": "system", 00:10:33.518 "dma_device_type": 1 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.518 "dma_device_type": 2 00:10:33.518 } 00:10:33.518 ], 00:10:33.518 "driver_specific": {} 00:10:33.518 } 00:10:33.518 ] 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.518 "name": "Existed_Raid", 
00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "strip_size_kb": 64, 00:10:33.518 "state": "configuring", 00:10:33.518 "raid_level": "raid0", 00:10:33.518 "superblock": false, 00:10:33.518 "num_base_bdevs": 4, 00:10:33.518 "num_base_bdevs_discovered": 1, 00:10:33.518 "num_base_bdevs_operational": 4, 00:10:33.518 "base_bdevs_list": [ 00:10:33.518 { 00:10:33.518 "name": "BaseBdev1", 00:10:33.518 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:33.518 "is_configured": true, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 65536 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "name": "BaseBdev2", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "name": "BaseBdev3", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "name": "BaseBdev4", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 } 00:10:33.518 ] 00:10:33.518 }' 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.518 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.087 [2024-11-18 03:59:30.468873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.087 [2024-11-18 03:59:30.469019] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.087 [2024-11-18 03:59:30.480905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.087 [2024-11-18 03:59:30.483010] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.087 [2024-11-18 03:59:30.483086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.087 [2024-11-18 03:59:30.483117] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.087 [2024-11-18 03:59:30.483141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.087 [2024-11-18 03:59:30.483158] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.087 [2024-11-18 03:59:30.483178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.087 "name": "Existed_Raid", 00:10:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.087 "strip_size_kb": 64, 00:10:34.087 "state": "configuring", 00:10:34.087 "raid_level": "raid0", 00:10:34.087 "superblock": false, 00:10:34.087 "num_base_bdevs": 4, 00:10:34.087 
"num_base_bdevs_discovered": 1, 00:10:34.087 "num_base_bdevs_operational": 4, 00:10:34.087 "base_bdevs_list": [ 00:10:34.087 { 00:10:34.087 "name": "BaseBdev1", 00:10:34.087 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:34.087 "is_configured": true, 00:10:34.087 "data_offset": 0, 00:10:34.087 "data_size": 65536 00:10:34.087 }, 00:10:34.087 { 00:10:34.087 "name": "BaseBdev2", 00:10:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.087 "is_configured": false, 00:10:34.087 "data_offset": 0, 00:10:34.087 "data_size": 0 00:10:34.087 }, 00:10:34.087 { 00:10:34.087 "name": "BaseBdev3", 00:10:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.087 "is_configured": false, 00:10:34.087 "data_offset": 0, 00:10:34.087 "data_size": 0 00:10:34.087 }, 00:10:34.087 { 00:10:34.087 "name": "BaseBdev4", 00:10:34.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.087 "is_configured": false, 00:10:34.087 "data_offset": 0, 00:10:34.087 "data_size": 0 00:10:34.087 } 00:10:34.087 ] 00:10:34.087 }' 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.087 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 [2024-11-18 03:59:30.929105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.348 BaseBdev2 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:34.348 03:59:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 [ 00:10:34.348 { 00:10:34.348 "name": "BaseBdev2", 00:10:34.348 "aliases": [ 00:10:34.348 "0547fffb-a8ae-4c43-b71a-fef7cbd78981" 00:10:34.348 ], 00:10:34.348 "product_name": "Malloc disk", 00:10:34.348 "block_size": 512, 00:10:34.348 "num_blocks": 65536, 00:10:34.348 "uuid": "0547fffb-a8ae-4c43-b71a-fef7cbd78981", 00:10:34.348 "assigned_rate_limits": { 00:10:34.348 "rw_ios_per_sec": 0, 00:10:34.348 "rw_mbytes_per_sec": 0, 00:10:34.348 "r_mbytes_per_sec": 0, 00:10:34.348 "w_mbytes_per_sec": 0 00:10:34.348 }, 00:10:34.348 "claimed": true, 00:10:34.348 "claim_type": "exclusive_write", 00:10:34.348 "zoned": false, 00:10:34.348 "supported_io_types": { 
00:10:34.348 "read": true, 00:10:34.348 "write": true, 00:10:34.348 "unmap": true, 00:10:34.348 "flush": true, 00:10:34.348 "reset": true, 00:10:34.348 "nvme_admin": false, 00:10:34.348 "nvme_io": false, 00:10:34.348 "nvme_io_md": false, 00:10:34.348 "write_zeroes": true, 00:10:34.348 "zcopy": true, 00:10:34.348 "get_zone_info": false, 00:10:34.348 "zone_management": false, 00:10:34.348 "zone_append": false, 00:10:34.348 "compare": false, 00:10:34.348 "compare_and_write": false, 00:10:34.348 "abort": true, 00:10:34.348 "seek_hole": false, 00:10:34.348 "seek_data": false, 00:10:34.348 "copy": true, 00:10:34.348 "nvme_iov_md": false 00:10:34.348 }, 00:10:34.348 "memory_domains": [ 00:10:34.348 { 00:10:34.348 "dma_device_id": "system", 00:10:34.348 "dma_device_type": 1 00:10:34.348 }, 00:10:34.348 { 00:10:34.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.348 "dma_device_type": 2 00:10:34.348 } 00:10:34.348 ], 00:10:34.348 "driver_specific": {} 00:10:34.348 } 00:10:34.348 ] 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.348 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.349 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.609 03:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.609 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.609 "name": "Existed_Raid", 00:10:34.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.609 "strip_size_kb": 64, 00:10:34.609 "state": "configuring", 00:10:34.609 "raid_level": "raid0", 00:10:34.609 "superblock": false, 00:10:34.609 "num_base_bdevs": 4, 00:10:34.609 "num_base_bdevs_discovered": 2, 00:10:34.609 "num_base_bdevs_operational": 4, 00:10:34.609 "base_bdevs_list": [ 00:10:34.609 { 00:10:34.609 "name": "BaseBdev1", 00:10:34.609 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:34.609 "is_configured": true, 00:10:34.609 "data_offset": 0, 00:10:34.609 "data_size": 65536 00:10:34.609 }, 00:10:34.609 { 00:10:34.609 "name": "BaseBdev2", 00:10:34.609 "uuid": "0547fffb-a8ae-4c43-b71a-fef7cbd78981", 00:10:34.609 
"is_configured": true, 00:10:34.609 "data_offset": 0, 00:10:34.609 "data_size": 65536 00:10:34.609 }, 00:10:34.609 { 00:10:34.609 "name": "BaseBdev3", 00:10:34.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.609 "is_configured": false, 00:10:34.609 "data_offset": 0, 00:10:34.609 "data_size": 0 00:10:34.609 }, 00:10:34.609 { 00:10:34.609 "name": "BaseBdev4", 00:10:34.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.609 "is_configured": false, 00:10:34.609 "data_offset": 0, 00:10:34.609 "data_size": 0 00:10:34.609 } 00:10:34.609 ] 00:10:34.609 }' 00:10:34.609 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.609 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.869 [2024-11-18 03:59:31.426339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.869 BaseBdev3 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.869 [ 00:10:34.869 { 00:10:34.869 "name": "BaseBdev3", 00:10:34.869 "aliases": [ 00:10:34.869 "4b6c0258-0d72-4e06-9798-46c26f228a43" 00:10:34.869 ], 00:10:34.869 "product_name": "Malloc disk", 00:10:34.869 "block_size": 512, 00:10:34.869 "num_blocks": 65536, 00:10:34.869 "uuid": "4b6c0258-0d72-4e06-9798-46c26f228a43", 00:10:34.869 "assigned_rate_limits": { 00:10:34.869 "rw_ios_per_sec": 0, 00:10:34.869 "rw_mbytes_per_sec": 0, 00:10:34.869 "r_mbytes_per_sec": 0, 00:10:34.869 "w_mbytes_per_sec": 0 00:10:34.869 }, 00:10:34.869 "claimed": true, 00:10:34.869 "claim_type": "exclusive_write", 00:10:34.869 "zoned": false, 00:10:34.869 "supported_io_types": { 00:10:34.869 "read": true, 00:10:34.869 "write": true, 00:10:34.869 "unmap": true, 00:10:34.869 "flush": true, 00:10:34.869 "reset": true, 00:10:34.869 "nvme_admin": false, 00:10:34.869 "nvme_io": false, 00:10:34.869 "nvme_io_md": false, 00:10:34.869 "write_zeroes": true, 00:10:34.869 "zcopy": true, 00:10:34.869 "get_zone_info": false, 00:10:34.869 "zone_management": false, 00:10:34.869 "zone_append": false, 00:10:34.869 "compare": false, 00:10:34.869 "compare_and_write": false, 
00:10:34.869 "abort": true, 00:10:34.869 "seek_hole": false, 00:10:34.869 "seek_data": false, 00:10:34.869 "copy": true, 00:10:34.869 "nvme_iov_md": false 00:10:34.869 }, 00:10:34.869 "memory_domains": [ 00:10:34.869 { 00:10:34.869 "dma_device_id": "system", 00:10:34.869 "dma_device_type": 1 00:10:34.869 }, 00:10:34.869 { 00:10:34.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.869 "dma_device_type": 2 00:10:34.869 } 00:10:34.869 ], 00:10:34.869 "driver_specific": {} 00:10:34.869 } 00:10:34.869 ] 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.869 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.128 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.128 "name": "Existed_Raid", 00:10:35.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.128 "strip_size_kb": 64, 00:10:35.128 "state": "configuring", 00:10:35.128 "raid_level": "raid0", 00:10:35.128 "superblock": false, 00:10:35.128 "num_base_bdevs": 4, 00:10:35.128 "num_base_bdevs_discovered": 3, 00:10:35.128 "num_base_bdevs_operational": 4, 00:10:35.128 "base_bdevs_list": [ 00:10:35.128 { 00:10:35.128 "name": "BaseBdev1", 00:10:35.128 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:35.128 "is_configured": true, 00:10:35.128 "data_offset": 0, 00:10:35.128 "data_size": 65536 00:10:35.128 }, 00:10:35.128 { 00:10:35.128 "name": "BaseBdev2", 00:10:35.128 "uuid": "0547fffb-a8ae-4c43-b71a-fef7cbd78981", 00:10:35.128 "is_configured": true, 00:10:35.128 "data_offset": 0, 00:10:35.128 "data_size": 65536 00:10:35.128 }, 00:10:35.128 { 00:10:35.128 "name": "BaseBdev3", 00:10:35.128 "uuid": "4b6c0258-0d72-4e06-9798-46c26f228a43", 00:10:35.128 "is_configured": true, 00:10:35.128 "data_offset": 0, 00:10:35.128 "data_size": 65536 00:10:35.128 }, 00:10:35.128 { 00:10:35.128 "name": "BaseBdev4", 00:10:35.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.128 "is_configured": false, 
00:10:35.128 "data_offset": 0, 00:10:35.128 "data_size": 0 00:10:35.128 } 00:10:35.128 ] 00:10:35.128 }' 00:10:35.128 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.128 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.388 [2024-11-18 03:59:31.927485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.388 [2024-11-18 03:59:31.927641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:35.388 [2024-11-18 03:59:31.927669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:35.388 [2024-11-18 03:59:31.928028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:35.388 [2024-11-18 03:59:31.928264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:35.388 [2024-11-18 03:59:31.928309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:35.388 [2024-11-18 03:59:31.928634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.388 BaseBdev4 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.388 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.388 [ 00:10:35.388 { 00:10:35.388 "name": "BaseBdev4", 00:10:35.388 "aliases": [ 00:10:35.388 "6b270a70-45fe-4288-a475-2c4186ed41a4" 00:10:35.388 ], 00:10:35.388 "product_name": "Malloc disk", 00:10:35.388 "block_size": 512, 00:10:35.388 "num_blocks": 65536, 00:10:35.388 "uuid": "6b270a70-45fe-4288-a475-2c4186ed41a4", 00:10:35.388 "assigned_rate_limits": { 00:10:35.388 "rw_ios_per_sec": 0, 00:10:35.388 "rw_mbytes_per_sec": 0, 00:10:35.388 "r_mbytes_per_sec": 0, 00:10:35.388 "w_mbytes_per_sec": 0 00:10:35.388 }, 00:10:35.388 "claimed": true, 00:10:35.388 "claim_type": "exclusive_write", 00:10:35.388 "zoned": false, 00:10:35.388 "supported_io_types": { 00:10:35.388 "read": true, 00:10:35.388 "write": true, 00:10:35.388 "unmap": true, 00:10:35.388 "flush": true, 00:10:35.388 "reset": true, 00:10:35.388 
"nvme_admin": false, 00:10:35.388 "nvme_io": false, 00:10:35.388 "nvme_io_md": false, 00:10:35.388 "write_zeroes": true, 00:10:35.388 "zcopy": true, 00:10:35.388 "get_zone_info": false, 00:10:35.388 "zone_management": false, 00:10:35.388 "zone_append": false, 00:10:35.388 "compare": false, 00:10:35.388 "compare_and_write": false, 00:10:35.388 "abort": true, 00:10:35.388 "seek_hole": false, 00:10:35.388 "seek_data": false, 00:10:35.388 "copy": true, 00:10:35.388 "nvme_iov_md": false 00:10:35.388 }, 00:10:35.388 "memory_domains": [ 00:10:35.388 { 00:10:35.388 "dma_device_id": "system", 00:10:35.388 "dma_device_type": 1 00:10:35.389 }, 00:10:35.389 { 00:10:35.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.389 "dma_device_type": 2 00:10:35.389 } 00:10:35.389 ], 00:10:35.389 "driver_specific": {} 00:10:35.389 } 00:10:35.389 ] 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.389 03:59:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.389 03:59:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.389 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.389 "name": "Existed_Raid", 00:10:35.389 "uuid": "380535eb-81a9-4f26-bb5f-14ac1aabd259", 00:10:35.389 "strip_size_kb": 64, 00:10:35.389 "state": "online", 00:10:35.389 "raid_level": "raid0", 00:10:35.389 "superblock": false, 00:10:35.389 "num_base_bdevs": 4, 00:10:35.389 "num_base_bdevs_discovered": 4, 00:10:35.389 "num_base_bdevs_operational": 4, 00:10:35.389 "base_bdevs_list": [ 00:10:35.389 { 00:10:35.389 "name": "BaseBdev1", 00:10:35.389 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:35.389 "is_configured": true, 00:10:35.389 "data_offset": 0, 00:10:35.389 "data_size": 65536 00:10:35.389 }, 00:10:35.389 { 00:10:35.389 "name": "BaseBdev2", 00:10:35.389 "uuid": "0547fffb-a8ae-4c43-b71a-fef7cbd78981", 00:10:35.389 "is_configured": true, 00:10:35.389 "data_offset": 0, 00:10:35.389 "data_size": 65536 00:10:35.389 }, 00:10:35.389 { 00:10:35.389 "name": "BaseBdev3", 00:10:35.389 "uuid": 
"4b6c0258-0d72-4e06-9798-46c26f228a43", 00:10:35.389 "is_configured": true, 00:10:35.389 "data_offset": 0, 00:10:35.389 "data_size": 65536 00:10:35.389 }, 00:10:35.389 { 00:10:35.389 "name": "BaseBdev4", 00:10:35.389 "uuid": "6b270a70-45fe-4288-a475-2c4186ed41a4", 00:10:35.389 "is_configured": true, 00:10:35.389 "data_offset": 0, 00:10:35.389 "data_size": 65536 00:10:35.389 } 00:10:35.389 ] 00:10:35.389 }' 00:10:35.389 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.389 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.959 [2024-11-18 03:59:32.407096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.959 03:59:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.959 "name": "Existed_Raid", 00:10:35.959 "aliases": [ 00:10:35.959 "380535eb-81a9-4f26-bb5f-14ac1aabd259" 00:10:35.959 ], 00:10:35.959 "product_name": "Raid Volume", 00:10:35.959 "block_size": 512, 00:10:35.959 "num_blocks": 262144, 00:10:35.959 "uuid": "380535eb-81a9-4f26-bb5f-14ac1aabd259", 00:10:35.959 "assigned_rate_limits": { 00:10:35.959 "rw_ios_per_sec": 0, 00:10:35.959 "rw_mbytes_per_sec": 0, 00:10:35.959 "r_mbytes_per_sec": 0, 00:10:35.959 "w_mbytes_per_sec": 0 00:10:35.959 }, 00:10:35.959 "claimed": false, 00:10:35.959 "zoned": false, 00:10:35.959 "supported_io_types": { 00:10:35.959 "read": true, 00:10:35.959 "write": true, 00:10:35.959 "unmap": true, 00:10:35.959 "flush": true, 00:10:35.959 "reset": true, 00:10:35.959 "nvme_admin": false, 00:10:35.959 "nvme_io": false, 00:10:35.959 "nvme_io_md": false, 00:10:35.959 "write_zeroes": true, 00:10:35.959 "zcopy": false, 00:10:35.959 "get_zone_info": false, 00:10:35.959 "zone_management": false, 00:10:35.959 "zone_append": false, 00:10:35.959 "compare": false, 00:10:35.959 "compare_and_write": false, 00:10:35.959 "abort": false, 00:10:35.959 "seek_hole": false, 00:10:35.959 "seek_data": false, 00:10:35.959 "copy": false, 00:10:35.959 "nvme_iov_md": false 00:10:35.959 }, 00:10:35.959 "memory_domains": [ 00:10:35.959 { 00:10:35.959 "dma_device_id": "system", 00:10:35.959 "dma_device_type": 1 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.959 "dma_device_type": 2 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "system", 00:10:35.959 "dma_device_type": 1 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.959 "dma_device_type": 2 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "system", 00:10:35.959 "dma_device_type": 1 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:35.959 "dma_device_type": 2 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "system", 00:10:35.959 "dma_device_type": 1 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.959 "dma_device_type": 2 00:10:35.959 } 00:10:35.959 ], 00:10:35.959 "driver_specific": { 00:10:35.959 "raid": { 00:10:35.959 "uuid": "380535eb-81a9-4f26-bb5f-14ac1aabd259", 00:10:35.959 "strip_size_kb": 64, 00:10:35.959 "state": "online", 00:10:35.959 "raid_level": "raid0", 00:10:35.959 "superblock": false, 00:10:35.959 "num_base_bdevs": 4, 00:10:35.959 "num_base_bdevs_discovered": 4, 00:10:35.959 "num_base_bdevs_operational": 4, 00:10:35.959 "base_bdevs_list": [ 00:10:35.959 { 00:10:35.959 "name": "BaseBdev1", 00:10:35.959 "uuid": "12f9c50f-1db8-47f1-a07e-abe3db614053", 00:10:35.959 "is_configured": true, 00:10:35.959 "data_offset": 0, 00:10:35.959 "data_size": 65536 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "name": "BaseBdev2", 00:10:35.959 "uuid": "0547fffb-a8ae-4c43-b71a-fef7cbd78981", 00:10:35.959 "is_configured": true, 00:10:35.959 "data_offset": 0, 00:10:35.959 "data_size": 65536 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "name": "BaseBdev3", 00:10:35.959 "uuid": "4b6c0258-0d72-4e06-9798-46c26f228a43", 00:10:35.959 "is_configured": true, 00:10:35.959 "data_offset": 0, 00:10:35.959 "data_size": 65536 00:10:35.959 }, 00:10:35.959 { 00:10:35.959 "name": "BaseBdev4", 00:10:35.959 "uuid": "6b270a70-45fe-4288-a475-2c4186ed41a4", 00:10:35.959 "is_configured": true, 00:10:35.959 "data_offset": 0, 00:10:35.959 "data_size": 65536 00:10:35.959 } 00:10:35.959 ] 00:10:35.959 } 00:10:35.959 } 00:10:35.959 }' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:35.959 BaseBdev2 00:10:35.959 BaseBdev3 
00:10:35.959 BaseBdev4' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.959 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.220 03:59:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.220 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.221 03:59:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.221 [2024-11-18 03:59:32.706251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.221 [2024-11-18 03:59:32.706368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.221 [2024-11-18 03:59:32.706448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.221 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.503 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.503 "name": "Existed_Raid", 00:10:36.503 "uuid": "380535eb-81a9-4f26-bb5f-14ac1aabd259", 00:10:36.503 "strip_size_kb": 64, 00:10:36.503 "state": "offline", 00:10:36.503 "raid_level": "raid0", 00:10:36.503 "superblock": false, 00:10:36.503 "num_base_bdevs": 4, 00:10:36.503 "num_base_bdevs_discovered": 3, 00:10:36.503 "num_base_bdevs_operational": 3, 00:10:36.503 "base_bdevs_list": [ 00:10:36.503 { 00:10:36.503 "name": null, 00:10:36.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.503 "is_configured": false, 00:10:36.503 "data_offset": 0, 00:10:36.503 "data_size": 65536 00:10:36.503 }, 00:10:36.503 { 00:10:36.503 "name": "BaseBdev2", 00:10:36.503 "uuid": "0547fffb-a8ae-4c43-b71a-fef7cbd78981", 00:10:36.503 "is_configured": 
true, 00:10:36.503 "data_offset": 0, 00:10:36.503 "data_size": 65536 00:10:36.503 }, 00:10:36.503 { 00:10:36.503 "name": "BaseBdev3", 00:10:36.503 "uuid": "4b6c0258-0d72-4e06-9798-46c26f228a43", 00:10:36.503 "is_configured": true, 00:10:36.503 "data_offset": 0, 00:10:36.503 "data_size": 65536 00:10:36.503 }, 00:10:36.503 { 00:10:36.503 "name": "BaseBdev4", 00:10:36.503 "uuid": "6b270a70-45fe-4288-a475-2c4186ed41a4", 00:10:36.503 "is_configured": true, 00:10:36.503 "data_offset": 0, 00:10:36.503 "data_size": 65536 00:10:36.503 } 00:10:36.503 ] 00:10:36.503 }' 00:10:36.503 03:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.503 03:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 [2024-11-18 03:59:33.276641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.809 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.809 [2024-11-18 03:59:33.436881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.068 03:59:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.068 [2024-11-18 03:59:33.590450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.068 [2024-11-18 03:59:33.590592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.068 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:37.069 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.069 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 BaseBdev2 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 [ 00:10:37.329 { 00:10:37.329 "name": "BaseBdev2", 00:10:37.329 "aliases": [ 00:10:37.329 "dfce1e84-2994-43c5-8b48-9009f14fe903" 00:10:37.329 ], 00:10:37.329 "product_name": "Malloc disk", 00:10:37.329 "block_size": 512, 00:10:37.329 "num_blocks": 65536, 00:10:37.329 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:37.329 "assigned_rate_limits": { 00:10:37.329 "rw_ios_per_sec": 0, 00:10:37.329 "rw_mbytes_per_sec": 0, 00:10:37.329 "r_mbytes_per_sec": 0, 00:10:37.329 "w_mbytes_per_sec": 0 00:10:37.329 }, 00:10:37.329 "claimed": false, 00:10:37.329 "zoned": false, 00:10:37.329 "supported_io_types": { 00:10:37.329 "read": true, 00:10:37.329 "write": true, 00:10:37.329 "unmap": true, 00:10:37.329 "flush": true, 00:10:37.329 "reset": true, 00:10:37.329 "nvme_admin": false, 00:10:37.329 "nvme_io": false, 00:10:37.329 "nvme_io_md": false, 00:10:37.329 "write_zeroes": true, 00:10:37.329 "zcopy": true, 00:10:37.329 "get_zone_info": false, 00:10:37.329 "zone_management": false, 00:10:37.329 "zone_append": false, 00:10:37.329 "compare": false, 00:10:37.329 "compare_and_write": false, 00:10:37.329 "abort": true, 00:10:37.329 "seek_hole": false, 00:10:37.329 
"seek_data": false, 00:10:37.329 "copy": true, 00:10:37.329 "nvme_iov_md": false 00:10:37.329 }, 00:10:37.329 "memory_domains": [ 00:10:37.329 { 00:10:37.329 "dma_device_id": "system", 00:10:37.329 "dma_device_type": 1 00:10:37.329 }, 00:10:37.329 { 00:10:37.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.329 "dma_device_type": 2 00:10:37.329 } 00:10:37.329 ], 00:10:37.329 "driver_specific": {} 00:10:37.329 } 00:10:37.329 ] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 BaseBdev3 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.329 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.329 [ 00:10:37.329 { 00:10:37.329 "name": "BaseBdev3", 00:10:37.329 "aliases": [ 00:10:37.329 "1f2f8d86-4789-4a86-9245-68e225f6c99f" 00:10:37.329 ], 00:10:37.329 "product_name": "Malloc disk", 00:10:37.329 "block_size": 512, 00:10:37.329 "num_blocks": 65536, 00:10:37.329 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:37.329 "assigned_rate_limits": { 00:10:37.329 "rw_ios_per_sec": 0, 00:10:37.329 "rw_mbytes_per_sec": 0, 00:10:37.329 "r_mbytes_per_sec": 0, 00:10:37.329 "w_mbytes_per_sec": 0 00:10:37.329 }, 00:10:37.329 "claimed": false, 00:10:37.329 "zoned": false, 00:10:37.329 "supported_io_types": { 00:10:37.329 "read": true, 00:10:37.329 "write": true, 00:10:37.329 "unmap": true, 00:10:37.329 "flush": true, 00:10:37.329 "reset": true, 00:10:37.329 "nvme_admin": false, 00:10:37.329 "nvme_io": false, 00:10:37.329 "nvme_io_md": false, 00:10:37.329 "write_zeroes": true, 00:10:37.329 "zcopy": true, 00:10:37.329 "get_zone_info": false, 00:10:37.329 "zone_management": false, 00:10:37.329 "zone_append": false, 00:10:37.329 "compare": false, 00:10:37.329 "compare_and_write": false, 00:10:37.329 "abort": true, 00:10:37.329 "seek_hole": false, 00:10:37.329 "seek_data": false, 
00:10:37.329 "copy": true, 00:10:37.329 "nvme_iov_md": false 00:10:37.329 }, 00:10:37.329 "memory_domains": [ 00:10:37.329 { 00:10:37.329 "dma_device_id": "system", 00:10:37.329 "dma_device_type": 1 00:10:37.329 }, 00:10:37.329 { 00:10:37.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.329 "dma_device_type": 2 00:10:37.329 } 00:10:37.329 ], 00:10:37.329 "driver_specific": {} 00:10:37.329 } 00:10:37.330 ] 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.330 BaseBdev4 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.330 
03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.330 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.591 [ 00:10:37.591 { 00:10:37.591 "name": "BaseBdev4", 00:10:37.591 "aliases": [ 00:10:37.591 "91ad0ef5-18a5-4995-8065-76f6786a0d3f" 00:10:37.591 ], 00:10:37.591 "product_name": "Malloc disk", 00:10:37.591 "block_size": 512, 00:10:37.591 "num_blocks": 65536, 00:10:37.591 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:37.591 "assigned_rate_limits": { 00:10:37.591 "rw_ios_per_sec": 0, 00:10:37.591 "rw_mbytes_per_sec": 0, 00:10:37.591 "r_mbytes_per_sec": 0, 00:10:37.591 "w_mbytes_per_sec": 0 00:10:37.591 }, 00:10:37.591 "claimed": false, 00:10:37.591 "zoned": false, 00:10:37.591 "supported_io_types": { 00:10:37.591 "read": true, 00:10:37.591 "write": true, 00:10:37.591 "unmap": true, 00:10:37.591 "flush": true, 00:10:37.591 "reset": true, 00:10:37.591 "nvme_admin": false, 00:10:37.591 "nvme_io": false, 00:10:37.591 "nvme_io_md": false, 00:10:37.591 "write_zeroes": true, 00:10:37.591 "zcopy": true, 00:10:37.591 "get_zone_info": false, 00:10:37.591 "zone_management": false, 00:10:37.591 "zone_append": false, 00:10:37.591 "compare": false, 00:10:37.591 "compare_and_write": false, 00:10:37.591 "abort": true, 00:10:37.591 "seek_hole": false, 00:10:37.591 "seek_data": false, 00:10:37.591 
"copy": true, 00:10:37.591 "nvme_iov_md": false 00:10:37.591 }, 00:10:37.591 "memory_domains": [ 00:10:37.591 { 00:10:37.591 "dma_device_id": "system", 00:10:37.591 "dma_device_type": 1 00:10:37.591 }, 00:10:37.591 { 00:10:37.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.591 "dma_device_type": 2 00:10:37.591 } 00:10:37.591 ], 00:10:37.591 "driver_specific": {} 00:10:37.591 } 00:10:37.591 ] 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.591 03:59:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.591 [2024-11-18 03:59:33.998899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.591 [2024-11-18 03:59:33.999037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.591 [2024-11-18 03:59:33.999083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.591 [2024-11-18 03:59:34.001367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.591 [2024-11-18 03:59:34.001466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.591 03:59:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.591 "name": "Existed_Raid", 00:10:37.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.591 "strip_size_kb": 64, 00:10:37.591 "state": "configuring", 00:10:37.591 
"raid_level": "raid0", 00:10:37.591 "superblock": false, 00:10:37.591 "num_base_bdevs": 4, 00:10:37.591 "num_base_bdevs_discovered": 3, 00:10:37.591 "num_base_bdevs_operational": 4, 00:10:37.591 "base_bdevs_list": [ 00:10:37.591 { 00:10:37.591 "name": "BaseBdev1", 00:10:37.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.591 "is_configured": false, 00:10:37.591 "data_offset": 0, 00:10:37.591 "data_size": 0 00:10:37.591 }, 00:10:37.591 { 00:10:37.591 "name": "BaseBdev2", 00:10:37.591 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:37.591 "is_configured": true, 00:10:37.591 "data_offset": 0, 00:10:37.591 "data_size": 65536 00:10:37.591 }, 00:10:37.591 { 00:10:37.591 "name": "BaseBdev3", 00:10:37.591 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:37.591 "is_configured": true, 00:10:37.591 "data_offset": 0, 00:10:37.591 "data_size": 65536 00:10:37.591 }, 00:10:37.591 { 00:10:37.591 "name": "BaseBdev4", 00:10:37.591 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:37.591 "is_configured": true, 00:10:37.591 "data_offset": 0, 00:10:37.591 "data_size": 65536 00:10:37.591 } 00:10:37.591 ] 00:10:37.591 }' 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.591 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.852 [2024-11-18 03:59:34.386265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.852 "name": "Existed_Raid", 00:10:37.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.852 "strip_size_kb": 64, 00:10:37.852 "state": "configuring", 00:10:37.852 "raid_level": "raid0", 00:10:37.852 "superblock": false, 00:10:37.852 
"num_base_bdevs": 4, 00:10:37.852 "num_base_bdevs_discovered": 2, 00:10:37.852 "num_base_bdevs_operational": 4, 00:10:37.852 "base_bdevs_list": [ 00:10:37.852 { 00:10:37.852 "name": "BaseBdev1", 00:10:37.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.852 "is_configured": false, 00:10:37.852 "data_offset": 0, 00:10:37.852 "data_size": 0 00:10:37.852 }, 00:10:37.852 { 00:10:37.852 "name": null, 00:10:37.852 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:37.852 "is_configured": false, 00:10:37.852 "data_offset": 0, 00:10:37.852 "data_size": 65536 00:10:37.852 }, 00:10:37.852 { 00:10:37.852 "name": "BaseBdev3", 00:10:37.852 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:37.852 "is_configured": true, 00:10:37.852 "data_offset": 0, 00:10:37.852 "data_size": 65536 00:10:37.852 }, 00:10:37.852 { 00:10:37.852 "name": "BaseBdev4", 00:10:37.852 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:37.852 "is_configured": true, 00:10:37.852 "data_offset": 0, 00:10:37.852 "data_size": 65536 00:10:37.852 } 00:10:37.852 ] 00:10:37.852 }' 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.852 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:38.421 03:59:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 [2024-11-18 03:59:34.928220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.421 BaseBdev1 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.421 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.422 [ 00:10:38.422 { 00:10:38.422 "name": "BaseBdev1", 00:10:38.422 "aliases": [ 00:10:38.422 "5286ebf7-08d3-4414-8931-ce1753bf2f8e" 00:10:38.422 ], 00:10:38.422 "product_name": "Malloc disk", 00:10:38.422 "block_size": 512, 00:10:38.422 "num_blocks": 65536, 00:10:38.422 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:38.422 "assigned_rate_limits": { 00:10:38.422 "rw_ios_per_sec": 0, 00:10:38.422 "rw_mbytes_per_sec": 0, 00:10:38.422 "r_mbytes_per_sec": 0, 00:10:38.422 "w_mbytes_per_sec": 0 00:10:38.422 }, 00:10:38.422 "claimed": true, 00:10:38.422 "claim_type": "exclusive_write", 00:10:38.422 "zoned": false, 00:10:38.422 "supported_io_types": { 00:10:38.422 "read": true, 00:10:38.422 "write": true, 00:10:38.422 "unmap": true, 00:10:38.422 "flush": true, 00:10:38.422 "reset": true, 00:10:38.422 "nvme_admin": false, 00:10:38.422 "nvme_io": false, 00:10:38.422 "nvme_io_md": false, 00:10:38.422 "write_zeroes": true, 00:10:38.422 "zcopy": true, 00:10:38.422 "get_zone_info": false, 00:10:38.422 "zone_management": false, 00:10:38.422 "zone_append": false, 00:10:38.422 "compare": false, 00:10:38.422 "compare_and_write": false, 00:10:38.422 "abort": true, 00:10:38.422 "seek_hole": false, 00:10:38.422 "seek_data": false, 00:10:38.422 "copy": true, 00:10:38.422 "nvme_iov_md": false 00:10:38.422 }, 00:10:38.422 "memory_domains": [ 00:10:38.422 { 00:10:38.422 "dma_device_id": "system", 00:10:38.422 "dma_device_type": 1 00:10:38.422 }, 00:10:38.422 { 00:10:38.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.422 "dma_device_type": 2 00:10:38.422 } 00:10:38.422 ], 00:10:38.422 "driver_specific": {} 00:10:38.422 } 00:10:38.422 ] 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.422 03:59:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.422 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.422 "name": "Existed_Raid", 00:10:38.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.422 "strip_size_kb": 64, 00:10:38.422 "state": "configuring", 00:10:38.422 "raid_level": "raid0", 00:10:38.422 "superblock": false, 
00:10:38.422 "num_base_bdevs": 4, 00:10:38.422 "num_base_bdevs_discovered": 3, 00:10:38.422 "num_base_bdevs_operational": 4, 00:10:38.422 "base_bdevs_list": [ 00:10:38.422 { 00:10:38.422 "name": "BaseBdev1", 00:10:38.422 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:38.422 "is_configured": true, 00:10:38.422 "data_offset": 0, 00:10:38.422 "data_size": 65536 00:10:38.422 }, 00:10:38.422 { 00:10:38.422 "name": null, 00:10:38.422 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:38.422 "is_configured": false, 00:10:38.422 "data_offset": 0, 00:10:38.422 "data_size": 65536 00:10:38.422 }, 00:10:38.422 { 00:10:38.422 "name": "BaseBdev3", 00:10:38.422 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:38.422 "is_configured": true, 00:10:38.422 "data_offset": 0, 00:10:38.422 "data_size": 65536 00:10:38.422 }, 00:10:38.422 { 00:10:38.422 "name": "BaseBdev4", 00:10:38.422 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:38.422 "is_configured": true, 00:10:38.422 "data_offset": 0, 00:10:38.422 "data_size": 65536 00:10:38.422 } 00:10:38.422 ] 00:10:38.422 }' 00:10:38.422 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.422 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:38.992 03:59:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.992 [2024-11-18 03:59:35.451424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.992 03:59:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.992 "name": "Existed_Raid", 00:10:38.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.992 "strip_size_kb": 64, 00:10:38.992 "state": "configuring", 00:10:38.992 "raid_level": "raid0", 00:10:38.992 "superblock": false, 00:10:38.992 "num_base_bdevs": 4, 00:10:38.992 "num_base_bdevs_discovered": 2, 00:10:38.992 "num_base_bdevs_operational": 4, 00:10:38.992 "base_bdevs_list": [ 00:10:38.992 { 00:10:38.992 "name": "BaseBdev1", 00:10:38.992 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:38.992 "is_configured": true, 00:10:38.992 "data_offset": 0, 00:10:38.992 "data_size": 65536 00:10:38.992 }, 00:10:38.992 { 00:10:38.992 "name": null, 00:10:38.992 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:38.992 "is_configured": false, 00:10:38.992 "data_offset": 0, 00:10:38.992 "data_size": 65536 00:10:38.992 }, 00:10:38.992 { 00:10:38.992 "name": null, 00:10:38.992 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:38.992 "is_configured": false, 00:10:38.992 "data_offset": 0, 00:10:38.992 "data_size": 65536 00:10:38.992 }, 00:10:38.992 { 00:10:38.992 "name": "BaseBdev4", 00:10:38.992 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:38.992 "is_configured": true, 00:10:38.992 "data_offset": 0, 00:10:38.992 "data_size": 65536 00:10:38.992 } 00:10:38.992 ] 00:10:38.992 }' 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.992 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.252 03:59:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.252 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.252 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.252 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.252 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.512 [2024-11-18 03:59:35.922649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.512 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.512 "name": "Existed_Raid", 00:10:39.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.512 "strip_size_kb": 64, 00:10:39.512 "state": "configuring", 00:10:39.512 "raid_level": "raid0", 00:10:39.512 "superblock": false, 00:10:39.512 "num_base_bdevs": 4, 00:10:39.512 "num_base_bdevs_discovered": 3, 00:10:39.512 "num_base_bdevs_operational": 4, 00:10:39.512 "base_bdevs_list": [ 00:10:39.512 { 00:10:39.512 "name": "BaseBdev1", 00:10:39.512 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:39.512 "is_configured": true, 00:10:39.512 "data_offset": 0, 00:10:39.512 "data_size": 65536 00:10:39.512 }, 00:10:39.512 { 00:10:39.512 "name": null, 00:10:39.512 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:39.512 "is_configured": false, 00:10:39.512 "data_offset": 0, 00:10:39.512 "data_size": 65536 00:10:39.513 }, 00:10:39.513 { 00:10:39.513 "name": "BaseBdev3", 00:10:39.513 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 
00:10:39.513 "is_configured": true, 00:10:39.513 "data_offset": 0, 00:10:39.513 "data_size": 65536 00:10:39.513 }, 00:10:39.513 { 00:10:39.513 "name": "BaseBdev4", 00:10:39.513 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:39.513 "is_configured": true, 00:10:39.513 "data_offset": 0, 00:10:39.513 "data_size": 65536 00:10:39.513 } 00:10:39.513 ] 00:10:39.513 }' 00:10:39.513 03:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.513 03:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.772 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.772 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.772 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.772 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.772 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.032 [2024-11-18 03:59:36.437816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.032 03:59:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.032 "name": "Existed_Raid", 00:10:40.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.032 "strip_size_kb": 64, 00:10:40.032 "state": "configuring", 00:10:40.032 "raid_level": "raid0", 00:10:40.032 "superblock": false, 00:10:40.032 "num_base_bdevs": 4, 00:10:40.032 "num_base_bdevs_discovered": 2, 00:10:40.032 
"num_base_bdevs_operational": 4, 00:10:40.032 "base_bdevs_list": [ 00:10:40.032 { 00:10:40.032 "name": null, 00:10:40.032 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:40.032 "is_configured": false, 00:10:40.032 "data_offset": 0, 00:10:40.032 "data_size": 65536 00:10:40.032 }, 00:10:40.032 { 00:10:40.032 "name": null, 00:10:40.032 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:40.032 "is_configured": false, 00:10:40.032 "data_offset": 0, 00:10:40.032 "data_size": 65536 00:10:40.032 }, 00:10:40.032 { 00:10:40.032 "name": "BaseBdev3", 00:10:40.032 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:40.032 "is_configured": true, 00:10:40.032 "data_offset": 0, 00:10:40.032 "data_size": 65536 00:10:40.032 }, 00:10:40.032 { 00:10:40.032 "name": "BaseBdev4", 00:10:40.032 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:40.032 "is_configured": true, 00:10:40.032 "data_offset": 0, 00:10:40.032 "data_size": 65536 00:10:40.032 } 00:10:40.032 ] 00:10:40.032 }' 00:10:40.032 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.033 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.602 03:59:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.602 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.602 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 03:59:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.602 [2024-11-18 03:59:37.012277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.602 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.603 
03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.603 "name": "Existed_Raid", 00:10:40.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.603 "strip_size_kb": 64, 00:10:40.603 "state": "configuring", 00:10:40.603 "raid_level": "raid0", 00:10:40.603 "superblock": false, 00:10:40.603 "num_base_bdevs": 4, 00:10:40.603 "num_base_bdevs_discovered": 3, 00:10:40.603 "num_base_bdevs_operational": 4, 00:10:40.603 "base_bdevs_list": [ 00:10:40.603 { 00:10:40.603 "name": null, 00:10:40.603 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:40.603 "is_configured": false, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 65536 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev2", 00:10:40.603 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:40.603 "is_configured": true, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 65536 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev3", 00:10:40.603 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:40.603 "is_configured": true, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 65536 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev4", 00:10:40.603 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:40.603 "is_configured": true, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 65536 00:10:40.603 } 00:10:40.603 ] 00:10:40.603 }' 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.603 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.863 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.863 03:59:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.863 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.863 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5286ebf7-08d3-4414-8931-ce1753bf2f8e 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.123 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.123 [2024-11-18 03:59:37.598583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:41.123 [2024-11-18 03:59:37.598736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:41.123 [2024-11-18 03:59:37.598762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:41.123 [2024-11-18 03:59:37.599106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:41.123 
NewBaseBdev 00:10:41.123 [2024-11-18 03:59:37.599312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:41.123 [2024-11-18 03:59:37.599329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:41.124 [2024-11-18 03:59:37.599629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:41.124 [ 00:10:41.124 { 00:10:41.124 "name": "NewBaseBdev", 00:10:41.124 "aliases": [ 00:10:41.124 "5286ebf7-08d3-4414-8931-ce1753bf2f8e" 00:10:41.124 ], 00:10:41.124 "product_name": "Malloc disk", 00:10:41.124 "block_size": 512, 00:10:41.124 "num_blocks": 65536, 00:10:41.124 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:41.124 "assigned_rate_limits": { 00:10:41.124 "rw_ios_per_sec": 0, 00:10:41.124 "rw_mbytes_per_sec": 0, 00:10:41.124 "r_mbytes_per_sec": 0, 00:10:41.124 "w_mbytes_per_sec": 0 00:10:41.124 }, 00:10:41.124 "claimed": true, 00:10:41.124 "claim_type": "exclusive_write", 00:10:41.124 "zoned": false, 00:10:41.124 "supported_io_types": { 00:10:41.124 "read": true, 00:10:41.124 "write": true, 00:10:41.124 "unmap": true, 00:10:41.124 "flush": true, 00:10:41.124 "reset": true, 00:10:41.124 "nvme_admin": false, 00:10:41.124 "nvme_io": false, 00:10:41.124 "nvme_io_md": false, 00:10:41.124 "write_zeroes": true, 00:10:41.124 "zcopy": true, 00:10:41.124 "get_zone_info": false, 00:10:41.124 "zone_management": false, 00:10:41.124 "zone_append": false, 00:10:41.124 "compare": false, 00:10:41.124 "compare_and_write": false, 00:10:41.124 "abort": true, 00:10:41.124 "seek_hole": false, 00:10:41.124 "seek_data": false, 00:10:41.124 "copy": true, 00:10:41.124 "nvme_iov_md": false 00:10:41.124 }, 00:10:41.124 "memory_domains": [ 00:10:41.124 { 00:10:41.124 "dma_device_id": "system", 00:10:41.124 "dma_device_type": 1 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.124 "dma_device_type": 2 00:10:41.124 } 00:10:41.124 ], 00:10:41.124 "driver_specific": {} 00:10:41.124 } 00:10:41.124 ] 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.124 "name": "Existed_Raid", 00:10:41.124 "uuid": "87f47518-3758-46ed-ba91-76dc24062f16", 00:10:41.124 "strip_size_kb": 64, 00:10:41.124 "state": "online", 00:10:41.124 "raid_level": "raid0", 00:10:41.124 "superblock": false, 00:10:41.124 "num_base_bdevs": 4, 00:10:41.124 
"num_base_bdevs_discovered": 4, 00:10:41.124 "num_base_bdevs_operational": 4, 00:10:41.124 "base_bdevs_list": [ 00:10:41.124 { 00:10:41.124 "name": "NewBaseBdev", 00:10:41.124 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:41.124 "is_configured": true, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 65536 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "name": "BaseBdev2", 00:10:41.124 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:41.124 "is_configured": true, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 65536 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "name": "BaseBdev3", 00:10:41.124 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:41.124 "is_configured": true, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 65536 00:10:41.124 }, 00:10:41.124 { 00:10:41.124 "name": "BaseBdev4", 00:10:41.124 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:41.124 "is_configured": true, 00:10:41.124 "data_offset": 0, 00:10:41.124 "data_size": 65536 00:10:41.124 } 00:10:41.124 ] 00:10:41.124 }' 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.124 03:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.693 [2024-11-18 03:59:38.086236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.693 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.693 "name": "Existed_Raid", 00:10:41.693 "aliases": [ 00:10:41.693 "87f47518-3758-46ed-ba91-76dc24062f16" 00:10:41.693 ], 00:10:41.693 "product_name": "Raid Volume", 00:10:41.693 "block_size": 512, 00:10:41.693 "num_blocks": 262144, 00:10:41.693 "uuid": "87f47518-3758-46ed-ba91-76dc24062f16", 00:10:41.693 "assigned_rate_limits": { 00:10:41.693 "rw_ios_per_sec": 0, 00:10:41.693 "rw_mbytes_per_sec": 0, 00:10:41.693 "r_mbytes_per_sec": 0, 00:10:41.693 "w_mbytes_per_sec": 0 00:10:41.693 }, 00:10:41.693 "claimed": false, 00:10:41.693 "zoned": false, 00:10:41.693 "supported_io_types": { 00:10:41.693 "read": true, 00:10:41.693 "write": true, 00:10:41.693 "unmap": true, 00:10:41.693 "flush": true, 00:10:41.693 "reset": true, 00:10:41.693 "nvme_admin": false, 00:10:41.693 "nvme_io": false, 00:10:41.693 "nvme_io_md": false, 00:10:41.693 "write_zeroes": true, 00:10:41.693 "zcopy": false, 00:10:41.693 "get_zone_info": false, 00:10:41.693 "zone_management": false, 00:10:41.693 "zone_append": false, 00:10:41.693 "compare": false, 00:10:41.693 "compare_and_write": false, 00:10:41.693 "abort": false, 00:10:41.693 "seek_hole": false, 00:10:41.693 "seek_data": false, 00:10:41.693 "copy": false, 00:10:41.693 "nvme_iov_md": false 00:10:41.693 }, 00:10:41.693 "memory_domains": [ 
00:10:41.693 { 00:10:41.693 "dma_device_id": "system", 00:10:41.693 "dma_device_type": 1 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.693 "dma_device_type": 2 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "system", 00:10:41.693 "dma_device_type": 1 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.693 "dma_device_type": 2 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "system", 00:10:41.693 "dma_device_type": 1 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.693 "dma_device_type": 2 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "system", 00:10:41.693 "dma_device_type": 1 00:10:41.693 }, 00:10:41.693 { 00:10:41.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.694 "dma_device_type": 2 00:10:41.694 } 00:10:41.694 ], 00:10:41.694 "driver_specific": { 00:10:41.694 "raid": { 00:10:41.694 "uuid": "87f47518-3758-46ed-ba91-76dc24062f16", 00:10:41.694 "strip_size_kb": 64, 00:10:41.694 "state": "online", 00:10:41.694 "raid_level": "raid0", 00:10:41.694 "superblock": false, 00:10:41.694 "num_base_bdevs": 4, 00:10:41.694 "num_base_bdevs_discovered": 4, 00:10:41.694 "num_base_bdevs_operational": 4, 00:10:41.694 "base_bdevs_list": [ 00:10:41.694 { 00:10:41.694 "name": "NewBaseBdev", 00:10:41.694 "uuid": "5286ebf7-08d3-4414-8931-ce1753bf2f8e", 00:10:41.694 "is_configured": true, 00:10:41.694 "data_offset": 0, 00:10:41.694 "data_size": 65536 00:10:41.694 }, 00:10:41.694 { 00:10:41.694 "name": "BaseBdev2", 00:10:41.694 "uuid": "dfce1e84-2994-43c5-8b48-9009f14fe903", 00:10:41.694 "is_configured": true, 00:10:41.694 "data_offset": 0, 00:10:41.694 "data_size": 65536 00:10:41.694 }, 00:10:41.694 { 00:10:41.694 "name": "BaseBdev3", 00:10:41.694 "uuid": "1f2f8d86-4789-4a86-9245-68e225f6c99f", 00:10:41.694 "is_configured": true, 00:10:41.694 "data_offset": 0, 00:10:41.694 "data_size": 65536 
00:10:41.694 }, 00:10:41.694 { 00:10:41.694 "name": "BaseBdev4", 00:10:41.694 "uuid": "91ad0ef5-18a5-4995-8065-76f6786a0d3f", 00:10:41.694 "is_configured": true, 00:10:41.694 "data_offset": 0, 00:10:41.694 "data_size": 65536 00:10:41.694 } 00:10:41.694 ] 00:10:41.694 } 00:10:41.694 } 00:10:41.694 }' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:41.694 BaseBdev2 00:10:41.694 BaseBdev3 00:10:41.694 BaseBdev4' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.694 
03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.694 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.954 [2024-11-18 03:59:38.393248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.954 [2024-11-18 03:59:38.393375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.954 [2024-11-18 03:59:38.393489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.954 [2024-11-18 03:59:38.393591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:41.954 [2024-11-18 03:59:38.393641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69356 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69356 ']' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69356 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69356 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.954 killing process with pid 69356 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69356' 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69356 00:10:41.954 [2024-11-18 03:59:38.443131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.954 03:59:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69356 00:10:42.523 [2024-11-18 03:59:38.865393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.461 03:59:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.461 00:10:43.461 real 0m11.541s 00:10:43.461 user 0m18.104s 00:10:43.461 sys 0m2.110s 00:10:43.461 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.461 ************************************ 00:10:43.461 END TEST raid_state_function_test 00:10:43.461 ************************************ 00:10:43.461 03:59:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.720 03:59:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:43.720 03:59:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.720 03:59:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.720 03:59:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.720 ************************************ 00:10:43.720 START TEST raid_state_function_test_sb 00:10:43.720 ************************************ 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.720 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:43.721 
03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70027 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70027' 00:10:43.721 Process raid pid: 70027 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70027 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70027 ']' 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.721 03:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.721 [2024-11-18 03:59:40.227166] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:43.721 [2024-11-18 03:59:40.227356] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.981 [2024-11-18 03:59:40.406531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.981 [2024-11-18 03:59:40.545158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.241 [2024-11-18 03:59:40.778533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.241 [2024-11-18 03:59:40.778581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.502 [2024-11-18 03:59:41.056771] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.502 [2024-11-18 03:59:41.056944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.502 [2024-11-18 03:59:41.056974] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.502 [2024-11-18 03:59:41.056996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.502 [2024-11-18 03:59:41.057013] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:44.502 [2024-11-18 03:59:41.057033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.502 [2024-11-18 03:59:41.057049] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.502 [2024-11-18 03:59:41.057070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.502 03:59:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.502 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.502 "name": "Existed_Raid", 00:10:44.502 "uuid": "3a7c3b09-9606-458f-a038-48b73032b7fb", 00:10:44.502 "strip_size_kb": 64, 00:10:44.502 "state": "configuring", 00:10:44.502 "raid_level": "raid0", 00:10:44.502 "superblock": true, 00:10:44.502 "num_base_bdevs": 4, 00:10:44.502 "num_base_bdevs_discovered": 0, 00:10:44.502 "num_base_bdevs_operational": 4, 00:10:44.502 "base_bdevs_list": [ 00:10:44.502 { 00:10:44.502 "name": "BaseBdev1", 00:10:44.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.502 "is_configured": false, 00:10:44.502 "data_offset": 0, 00:10:44.502 "data_size": 0 00:10:44.502 }, 00:10:44.502 { 00:10:44.502 "name": "BaseBdev2", 00:10:44.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.502 "is_configured": false, 00:10:44.502 "data_offset": 0, 00:10:44.502 "data_size": 0 00:10:44.502 }, 00:10:44.502 { 00:10:44.502 "name": "BaseBdev3", 00:10:44.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.502 "is_configured": false, 00:10:44.502 "data_offset": 0, 00:10:44.502 "data_size": 0 00:10:44.502 }, 00:10:44.502 { 00:10:44.502 "name": "BaseBdev4", 00:10:44.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.503 "is_configured": false, 00:10:44.503 "data_offset": 0, 00:10:44.503 "data_size": 0 00:10:44.503 } 00:10:44.503 ] 00:10:44.503 }' 00:10:44.503 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.503 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.072 [2024-11-18 03:59:41.508037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.072 [2024-11-18 03:59:41.508181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.072 [2024-11-18 03:59:41.519956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.072 [2024-11-18 03:59:41.520036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.072 [2024-11-18 03:59:41.520063] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.072 [2024-11-18 03:59:41.520086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.072 [2024-11-18 03:59:41.520104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.072 [2024-11-18 03:59:41.520124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.072 [2024-11-18 03:59:41.520141] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:45.072 [2024-11-18 03:59:41.520162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.072 [2024-11-18 03:59:41.574723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.072 BaseBdev1 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.072 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.072 [ 00:10:45.072 { 00:10:45.072 "name": "BaseBdev1", 00:10:45.072 "aliases": [ 00:10:45.072 "c8f5e6cb-17c9-4358-851b-700d2df5b8f7" 00:10:45.072 ], 00:10:45.072 "product_name": "Malloc disk", 00:10:45.072 "block_size": 512, 00:10:45.072 "num_blocks": 65536, 00:10:45.072 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:45.072 "assigned_rate_limits": { 00:10:45.072 "rw_ios_per_sec": 0, 00:10:45.072 "rw_mbytes_per_sec": 0, 00:10:45.072 "r_mbytes_per_sec": 0, 00:10:45.072 "w_mbytes_per_sec": 0 00:10:45.072 }, 00:10:45.072 "claimed": true, 00:10:45.072 "claim_type": "exclusive_write", 00:10:45.072 "zoned": false, 00:10:45.072 "supported_io_types": { 00:10:45.072 "read": true, 00:10:45.072 "write": true, 00:10:45.072 "unmap": true, 00:10:45.072 "flush": true, 00:10:45.072 "reset": true, 00:10:45.073 "nvme_admin": false, 00:10:45.073 "nvme_io": false, 00:10:45.073 "nvme_io_md": false, 00:10:45.073 "write_zeroes": true, 00:10:45.073 "zcopy": true, 00:10:45.073 "get_zone_info": false, 00:10:45.073 "zone_management": false, 00:10:45.073 "zone_append": false, 00:10:45.073 "compare": false, 00:10:45.073 "compare_and_write": false, 00:10:45.073 "abort": true, 00:10:45.073 "seek_hole": false, 00:10:45.073 "seek_data": false, 00:10:45.073 "copy": true, 00:10:45.073 "nvme_iov_md": false 00:10:45.073 }, 00:10:45.073 "memory_domains": [ 00:10:45.073 { 00:10:45.073 "dma_device_id": "system", 00:10:45.073 "dma_device_type": 1 00:10:45.073 }, 00:10:45.073 { 00:10:45.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.073 "dma_device_type": 2 00:10:45.073 } 00:10:45.073 ], 00:10:45.073 "driver_specific": {} 
00:10:45.073 } 00:10:45.073 ] 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.073 "name": "Existed_Raid", 00:10:45.073 "uuid": "94e97847-1691-441c-ab87-23b48413a2e8", 00:10:45.073 "strip_size_kb": 64, 00:10:45.073 "state": "configuring", 00:10:45.073 "raid_level": "raid0", 00:10:45.073 "superblock": true, 00:10:45.073 "num_base_bdevs": 4, 00:10:45.073 "num_base_bdevs_discovered": 1, 00:10:45.073 "num_base_bdevs_operational": 4, 00:10:45.073 "base_bdevs_list": [ 00:10:45.073 { 00:10:45.073 "name": "BaseBdev1", 00:10:45.073 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:45.073 "is_configured": true, 00:10:45.073 "data_offset": 2048, 00:10:45.073 "data_size": 63488 00:10:45.073 }, 00:10:45.073 { 00:10:45.073 "name": "BaseBdev2", 00:10:45.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.073 "is_configured": false, 00:10:45.073 "data_offset": 0, 00:10:45.073 "data_size": 0 00:10:45.073 }, 00:10:45.073 { 00:10:45.073 "name": "BaseBdev3", 00:10:45.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.073 "is_configured": false, 00:10:45.073 "data_offset": 0, 00:10:45.073 "data_size": 0 00:10:45.073 }, 00:10:45.073 { 00:10:45.073 "name": "BaseBdev4", 00:10:45.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.073 "is_configured": false, 00:10:45.073 "data_offset": 0, 00:10:45.073 "data_size": 0 00:10:45.073 } 00:10:45.073 ] 00:10:45.073 }' 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.073 03:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.642 [2024-11-18 03:59:42.042020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.642 [2024-11-18 03:59:42.042181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.642 [2024-11-18 03:59:42.054052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.642 [2024-11-18 03:59:42.056154] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.642 [2024-11-18 03:59:42.056233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.642 [2024-11-18 03:59:42.056263] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.642 [2024-11-18 03:59:42.056287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.642 [2024-11-18 03:59:42.056304] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.642 [2024-11-18 03:59:42.056324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:45.642 03:59:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.642 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.642 "name": 
"Existed_Raid", 00:10:45.642 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:45.642 "strip_size_kb": 64, 00:10:45.642 "state": "configuring", 00:10:45.642 "raid_level": "raid0", 00:10:45.642 "superblock": true, 00:10:45.642 "num_base_bdevs": 4, 00:10:45.642 "num_base_bdevs_discovered": 1, 00:10:45.642 "num_base_bdevs_operational": 4, 00:10:45.642 "base_bdevs_list": [ 00:10:45.642 { 00:10:45.642 "name": "BaseBdev1", 00:10:45.642 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:45.642 "is_configured": true, 00:10:45.642 "data_offset": 2048, 00:10:45.642 "data_size": 63488 00:10:45.642 }, 00:10:45.642 { 00:10:45.642 "name": "BaseBdev2", 00:10:45.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.642 "is_configured": false, 00:10:45.642 "data_offset": 0, 00:10:45.642 "data_size": 0 00:10:45.642 }, 00:10:45.642 { 00:10:45.642 "name": "BaseBdev3", 00:10:45.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.643 "is_configured": false, 00:10:45.643 "data_offset": 0, 00:10:45.643 "data_size": 0 00:10:45.643 }, 00:10:45.643 { 00:10:45.643 "name": "BaseBdev4", 00:10:45.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.643 "is_configured": false, 00:10:45.643 "data_offset": 0, 00:10:45.643 "data_size": 0 00:10:45.643 } 00:10:45.643 ] 00:10:45.643 }' 00:10:45.643 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.643 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.902 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:45.902 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.902 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.902 [2024-11-18 03:59:42.540901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:45.902 BaseBdev2 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.162 [ 00:10:46.162 { 00:10:46.162 "name": "BaseBdev2", 00:10:46.162 "aliases": [ 00:10:46.162 "14af8608-af9e-4003-94a5-e08d5f1e1da5" 00:10:46.162 ], 00:10:46.162 "product_name": "Malloc disk", 00:10:46.162 "block_size": 512, 00:10:46.162 "num_blocks": 65536, 00:10:46.162 "uuid": "14af8608-af9e-4003-94a5-e08d5f1e1da5", 00:10:46.162 
"assigned_rate_limits": { 00:10:46.162 "rw_ios_per_sec": 0, 00:10:46.162 "rw_mbytes_per_sec": 0, 00:10:46.162 "r_mbytes_per_sec": 0, 00:10:46.162 "w_mbytes_per_sec": 0 00:10:46.162 }, 00:10:46.162 "claimed": true, 00:10:46.162 "claim_type": "exclusive_write", 00:10:46.162 "zoned": false, 00:10:46.162 "supported_io_types": { 00:10:46.162 "read": true, 00:10:46.162 "write": true, 00:10:46.162 "unmap": true, 00:10:46.162 "flush": true, 00:10:46.162 "reset": true, 00:10:46.162 "nvme_admin": false, 00:10:46.162 "nvme_io": false, 00:10:46.162 "nvme_io_md": false, 00:10:46.162 "write_zeroes": true, 00:10:46.162 "zcopy": true, 00:10:46.162 "get_zone_info": false, 00:10:46.162 "zone_management": false, 00:10:46.162 "zone_append": false, 00:10:46.162 "compare": false, 00:10:46.162 "compare_and_write": false, 00:10:46.162 "abort": true, 00:10:46.162 "seek_hole": false, 00:10:46.162 "seek_data": false, 00:10:46.162 "copy": true, 00:10:46.162 "nvme_iov_md": false 00:10:46.162 }, 00:10:46.162 "memory_domains": [ 00:10:46.162 { 00:10:46.162 "dma_device_id": "system", 00:10:46.162 "dma_device_type": 1 00:10:46.162 }, 00:10:46.162 { 00:10:46.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.162 "dma_device_type": 2 00:10:46.162 } 00:10:46.162 ], 00:10:46.162 "driver_specific": {} 00:10:46.162 } 00:10:46.162 ] 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.162 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.162 "name": "Existed_Raid", 00:10:46.162 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:46.162 "strip_size_kb": 64, 00:10:46.162 "state": "configuring", 00:10:46.162 "raid_level": "raid0", 00:10:46.162 "superblock": true, 00:10:46.162 "num_base_bdevs": 4, 00:10:46.162 "num_base_bdevs_discovered": 2, 00:10:46.162 "num_base_bdevs_operational": 4, 
00:10:46.162 "base_bdevs_list": [ 00:10:46.162 { 00:10:46.162 "name": "BaseBdev1", 00:10:46.162 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:46.162 "is_configured": true, 00:10:46.162 "data_offset": 2048, 00:10:46.162 "data_size": 63488 00:10:46.162 }, 00:10:46.162 { 00:10:46.162 "name": "BaseBdev2", 00:10:46.162 "uuid": "14af8608-af9e-4003-94a5-e08d5f1e1da5", 00:10:46.162 "is_configured": true, 00:10:46.162 "data_offset": 2048, 00:10:46.162 "data_size": 63488 00:10:46.162 }, 00:10:46.162 { 00:10:46.162 "name": "BaseBdev3", 00:10:46.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.163 "is_configured": false, 00:10:46.163 "data_offset": 0, 00:10:46.163 "data_size": 0 00:10:46.163 }, 00:10:46.163 { 00:10:46.163 "name": "BaseBdev4", 00:10:46.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.163 "is_configured": false, 00:10:46.163 "data_offset": 0, 00:10:46.163 "data_size": 0 00:10:46.163 } 00:10:46.163 ] 00:10:46.163 }' 00:10:46.163 03:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.163 03:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.423 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.423 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.423 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.683 [2024-11-18 03:59:43.089126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.683 BaseBdev3 00:10:46.683 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.683 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:46.683 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.684 [ 00:10:46.684 { 00:10:46.684 "name": "BaseBdev3", 00:10:46.684 "aliases": [ 00:10:46.684 "34dd01df-81ab-4d0e-9f84-52af975ad998" 00:10:46.684 ], 00:10:46.684 "product_name": "Malloc disk", 00:10:46.684 "block_size": 512, 00:10:46.684 "num_blocks": 65536, 00:10:46.684 "uuid": "34dd01df-81ab-4d0e-9f84-52af975ad998", 00:10:46.684 "assigned_rate_limits": { 00:10:46.684 "rw_ios_per_sec": 0, 00:10:46.684 "rw_mbytes_per_sec": 0, 00:10:46.684 "r_mbytes_per_sec": 0, 00:10:46.684 "w_mbytes_per_sec": 0 00:10:46.684 }, 00:10:46.684 "claimed": true, 00:10:46.684 "claim_type": "exclusive_write", 00:10:46.684 "zoned": false, 00:10:46.684 "supported_io_types": { 00:10:46.684 "read": true, 00:10:46.684 
"write": true, 00:10:46.684 "unmap": true, 00:10:46.684 "flush": true, 00:10:46.684 "reset": true, 00:10:46.684 "nvme_admin": false, 00:10:46.684 "nvme_io": false, 00:10:46.684 "nvme_io_md": false, 00:10:46.684 "write_zeroes": true, 00:10:46.684 "zcopy": true, 00:10:46.684 "get_zone_info": false, 00:10:46.684 "zone_management": false, 00:10:46.684 "zone_append": false, 00:10:46.684 "compare": false, 00:10:46.684 "compare_and_write": false, 00:10:46.684 "abort": true, 00:10:46.684 "seek_hole": false, 00:10:46.684 "seek_data": false, 00:10:46.684 "copy": true, 00:10:46.684 "nvme_iov_md": false 00:10:46.684 }, 00:10:46.684 "memory_domains": [ 00:10:46.684 { 00:10:46.684 "dma_device_id": "system", 00:10:46.684 "dma_device_type": 1 00:10:46.684 }, 00:10:46.684 { 00:10:46.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.684 "dma_device_type": 2 00:10:46.684 } 00:10:46.684 ], 00:10:46.684 "driver_specific": {} 00:10:46.684 } 00:10:46.684 ] 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.684 "name": "Existed_Raid", 00:10:46.684 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:46.684 "strip_size_kb": 64, 00:10:46.684 "state": "configuring", 00:10:46.684 "raid_level": "raid0", 00:10:46.684 "superblock": true, 00:10:46.684 "num_base_bdevs": 4, 00:10:46.684 "num_base_bdevs_discovered": 3, 00:10:46.684 "num_base_bdevs_operational": 4, 00:10:46.684 "base_bdevs_list": [ 00:10:46.684 { 00:10:46.684 "name": "BaseBdev1", 00:10:46.684 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:46.684 "is_configured": true, 00:10:46.684 "data_offset": 2048, 00:10:46.684 "data_size": 63488 00:10:46.684 }, 00:10:46.684 { 00:10:46.684 "name": "BaseBdev2", 00:10:46.684 "uuid": 
"14af8608-af9e-4003-94a5-e08d5f1e1da5", 00:10:46.684 "is_configured": true, 00:10:46.684 "data_offset": 2048, 00:10:46.684 "data_size": 63488 00:10:46.684 }, 00:10:46.684 { 00:10:46.684 "name": "BaseBdev3", 00:10:46.684 "uuid": "34dd01df-81ab-4d0e-9f84-52af975ad998", 00:10:46.684 "is_configured": true, 00:10:46.684 "data_offset": 2048, 00:10:46.684 "data_size": 63488 00:10:46.684 }, 00:10:46.684 { 00:10:46.684 "name": "BaseBdev4", 00:10:46.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.684 "is_configured": false, 00:10:46.684 "data_offset": 0, 00:10:46.684 "data_size": 0 00:10:46.684 } 00:10:46.684 ] 00:10:46.684 }' 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.684 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.944 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:46.944 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.944 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.205 [2024-11-18 03:59:43.593452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.205 [2024-11-18 03:59:43.593895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.205 BaseBdev4 00:10:47.205 [2024-11-18 03:59:43.593950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.205 [2024-11-18 03:59:43.594272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.205 [2024-11-18 03:59:43.594442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.205 [2024-11-18 03:59:43.594457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:47.205 [2024-11-18 03:59:43.594621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.205 [ 00:10:47.205 { 00:10:47.205 "name": "BaseBdev4", 00:10:47.205 "aliases": [ 00:10:47.205 "abf37009-794c-4051-9436-20fcfecd9230" 00:10:47.205 ], 00:10:47.205 "product_name": "Malloc disk", 00:10:47.205 "block_size": 512, 00:10:47.205 
"num_blocks": 65536, 00:10:47.205 "uuid": "abf37009-794c-4051-9436-20fcfecd9230", 00:10:47.205 "assigned_rate_limits": { 00:10:47.205 "rw_ios_per_sec": 0, 00:10:47.205 "rw_mbytes_per_sec": 0, 00:10:47.205 "r_mbytes_per_sec": 0, 00:10:47.205 "w_mbytes_per_sec": 0 00:10:47.205 }, 00:10:47.205 "claimed": true, 00:10:47.205 "claim_type": "exclusive_write", 00:10:47.205 "zoned": false, 00:10:47.205 "supported_io_types": { 00:10:47.205 "read": true, 00:10:47.205 "write": true, 00:10:47.205 "unmap": true, 00:10:47.205 "flush": true, 00:10:47.205 "reset": true, 00:10:47.205 "nvme_admin": false, 00:10:47.205 "nvme_io": false, 00:10:47.205 "nvme_io_md": false, 00:10:47.205 "write_zeroes": true, 00:10:47.205 "zcopy": true, 00:10:47.205 "get_zone_info": false, 00:10:47.205 "zone_management": false, 00:10:47.205 "zone_append": false, 00:10:47.205 "compare": false, 00:10:47.205 "compare_and_write": false, 00:10:47.205 "abort": true, 00:10:47.205 "seek_hole": false, 00:10:47.205 "seek_data": false, 00:10:47.205 "copy": true, 00:10:47.205 "nvme_iov_md": false 00:10:47.205 }, 00:10:47.205 "memory_domains": [ 00:10:47.205 { 00:10:47.205 "dma_device_id": "system", 00:10:47.205 "dma_device_type": 1 00:10:47.205 }, 00:10:47.205 { 00:10:47.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.205 "dma_device_type": 2 00:10:47.205 } 00:10:47.205 ], 00:10:47.205 "driver_specific": {} 00:10:47.205 } 00:10:47.205 ] 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.205 "name": "Existed_Raid", 00:10:47.205 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:47.205 "strip_size_kb": 64, 00:10:47.205 "state": "online", 00:10:47.205 "raid_level": "raid0", 00:10:47.205 "superblock": true, 00:10:47.205 "num_base_bdevs": 4, 
00:10:47.205 "num_base_bdevs_discovered": 4, 00:10:47.205 "num_base_bdevs_operational": 4, 00:10:47.205 "base_bdevs_list": [ 00:10:47.205 { 00:10:47.205 "name": "BaseBdev1", 00:10:47.205 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:47.205 "is_configured": true, 00:10:47.205 "data_offset": 2048, 00:10:47.205 "data_size": 63488 00:10:47.205 }, 00:10:47.205 { 00:10:47.205 "name": "BaseBdev2", 00:10:47.205 "uuid": "14af8608-af9e-4003-94a5-e08d5f1e1da5", 00:10:47.205 "is_configured": true, 00:10:47.205 "data_offset": 2048, 00:10:47.205 "data_size": 63488 00:10:47.205 }, 00:10:47.205 { 00:10:47.205 "name": "BaseBdev3", 00:10:47.205 "uuid": "34dd01df-81ab-4d0e-9f84-52af975ad998", 00:10:47.205 "is_configured": true, 00:10:47.205 "data_offset": 2048, 00:10:47.205 "data_size": 63488 00:10:47.205 }, 00:10:47.205 { 00:10:47.205 "name": "BaseBdev4", 00:10:47.205 "uuid": "abf37009-794c-4051-9436-20fcfecd9230", 00:10:47.205 "is_configured": true, 00:10:47.205 "data_offset": 2048, 00:10:47.205 "data_size": 63488 00:10:47.205 } 00:10:47.205 ] 00:10:47.205 }' 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.205 03:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.466 
03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.466 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.466 [2024-11-18 03:59:44.089102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.726 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.726 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.726 "name": "Existed_Raid", 00:10:47.726 "aliases": [ 00:10:47.726 "426de43d-e70b-4502-8d53-97cf92cdf44a" 00:10:47.726 ], 00:10:47.726 "product_name": "Raid Volume", 00:10:47.726 "block_size": 512, 00:10:47.726 "num_blocks": 253952, 00:10:47.726 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:47.726 "assigned_rate_limits": { 00:10:47.726 "rw_ios_per_sec": 0, 00:10:47.726 "rw_mbytes_per_sec": 0, 00:10:47.726 "r_mbytes_per_sec": 0, 00:10:47.726 "w_mbytes_per_sec": 0 00:10:47.726 }, 00:10:47.726 "claimed": false, 00:10:47.726 "zoned": false, 00:10:47.726 "supported_io_types": { 00:10:47.726 "read": true, 00:10:47.726 "write": true, 00:10:47.726 "unmap": true, 00:10:47.726 "flush": true, 00:10:47.726 "reset": true, 00:10:47.726 "nvme_admin": false, 00:10:47.726 "nvme_io": false, 00:10:47.726 "nvme_io_md": false, 00:10:47.726 "write_zeroes": true, 00:10:47.726 "zcopy": false, 00:10:47.726 "get_zone_info": false, 00:10:47.726 "zone_management": false, 00:10:47.726 "zone_append": false, 00:10:47.726 "compare": false, 00:10:47.726 "compare_and_write": false, 00:10:47.726 "abort": false, 00:10:47.726 "seek_hole": false, 00:10:47.726 "seek_data": false, 00:10:47.726 "copy": false, 00:10:47.726 
"nvme_iov_md": false 00:10:47.726 }, 00:10:47.726 "memory_domains": [ 00:10:47.726 { 00:10:47.726 "dma_device_id": "system", 00:10:47.726 "dma_device_type": 1 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.726 "dma_device_type": 2 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "system", 00:10:47.726 "dma_device_type": 1 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.726 "dma_device_type": 2 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "system", 00:10:47.726 "dma_device_type": 1 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.726 "dma_device_type": 2 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "system", 00:10:47.726 "dma_device_type": 1 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.726 "dma_device_type": 2 00:10:47.726 } 00:10:47.726 ], 00:10:47.726 "driver_specific": { 00:10:47.726 "raid": { 00:10:47.726 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:47.726 "strip_size_kb": 64, 00:10:47.726 "state": "online", 00:10:47.726 "raid_level": "raid0", 00:10:47.726 "superblock": true, 00:10:47.726 "num_base_bdevs": 4, 00:10:47.726 "num_base_bdevs_discovered": 4, 00:10:47.726 "num_base_bdevs_operational": 4, 00:10:47.726 "base_bdevs_list": [ 00:10:47.726 { 00:10:47.726 "name": "BaseBdev1", 00:10:47.726 "uuid": "c8f5e6cb-17c9-4358-851b-700d2df5b8f7", 00:10:47.726 "is_configured": true, 00:10:47.726 "data_offset": 2048, 00:10:47.726 "data_size": 63488 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "name": "BaseBdev2", 00:10:47.726 "uuid": "14af8608-af9e-4003-94a5-e08d5f1e1da5", 00:10:47.726 "is_configured": true, 00:10:47.726 "data_offset": 2048, 00:10:47.726 "data_size": 63488 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "name": "BaseBdev3", 00:10:47.726 "uuid": "34dd01df-81ab-4d0e-9f84-52af975ad998", 00:10:47.726 "is_configured": true, 
00:10:47.726 "data_offset": 2048, 00:10:47.726 "data_size": 63488 00:10:47.726 }, 00:10:47.726 { 00:10:47.726 "name": "BaseBdev4", 00:10:47.726 "uuid": "abf37009-794c-4051-9436-20fcfecd9230", 00:10:47.726 "is_configured": true, 00:10:47.726 "data_offset": 2048, 00:10:47.726 "data_size": 63488 00:10:47.726 } 00:10:47.726 ] 00:10:47.726 } 00:10:47.726 } 00:10:47.726 }' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.727 BaseBdev2 00:10:47.727 BaseBdev3 00:10:47.727 BaseBdev4' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.727 03:59:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.727 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.987 [2024-11-18 03:59:44.372205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.987 [2024-11-18 03:59:44.372326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.987 [2024-11-18 03:59:44.372410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.987 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.988 "name": "Existed_Raid", 00:10:47.988 "uuid": "426de43d-e70b-4502-8d53-97cf92cdf44a", 00:10:47.988 "strip_size_kb": 64, 00:10:47.988 "state": "offline", 00:10:47.988 "raid_level": "raid0", 00:10:47.988 "superblock": true, 00:10:47.988 "num_base_bdevs": 4, 00:10:47.988 "num_base_bdevs_discovered": 3, 00:10:47.988 "num_base_bdevs_operational": 3, 00:10:47.988 "base_bdevs_list": [ 00:10:47.988 { 00:10:47.988 "name": null, 00:10:47.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.988 "is_configured": false, 00:10:47.988 "data_offset": 0, 00:10:47.988 "data_size": 63488 00:10:47.988 }, 00:10:47.988 { 00:10:47.988 "name": "BaseBdev2", 00:10:47.988 "uuid": "14af8608-af9e-4003-94a5-e08d5f1e1da5", 00:10:47.988 "is_configured": true, 00:10:47.988 "data_offset": 2048, 00:10:47.988 "data_size": 63488 00:10:47.988 }, 00:10:47.988 { 00:10:47.988 "name": "BaseBdev3", 00:10:47.988 "uuid": "34dd01df-81ab-4d0e-9f84-52af975ad998", 00:10:47.988 "is_configured": true, 00:10:47.988 "data_offset": 2048, 00:10:47.988 "data_size": 63488 00:10:47.988 }, 00:10:47.988 { 00:10:47.988 "name": "BaseBdev4", 00:10:47.988 "uuid": "abf37009-794c-4051-9436-20fcfecd9230", 00:10:47.988 "is_configured": true, 00:10:47.988 "data_offset": 2048, 00:10:47.988 "data_size": 63488 00:10:47.988 } 00:10:47.988 ] 00:10:47.988 }' 00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.988 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.558 03:59:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.558 03:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 [2024-11-18 03:59:44.977075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.558 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.558 [2024-11-18 03:59:45.122108] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:48.819 03:59:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.819 [2024-11-18 03:59:45.279868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:48.819 [2024-11-18 03:59:45.280011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.819 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.080 BaseBdev2 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.080 [ 00:10:49.080 { 00:10:49.080 "name": "BaseBdev2", 00:10:49.080 "aliases": [ 00:10:49.080 
"26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b" 00:10:49.080 ], 00:10:49.080 "product_name": "Malloc disk", 00:10:49.080 "block_size": 512, 00:10:49.080 "num_blocks": 65536, 00:10:49.080 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:49.080 "assigned_rate_limits": { 00:10:49.080 "rw_ios_per_sec": 0, 00:10:49.080 "rw_mbytes_per_sec": 0, 00:10:49.080 "r_mbytes_per_sec": 0, 00:10:49.080 "w_mbytes_per_sec": 0 00:10:49.080 }, 00:10:49.080 "claimed": false, 00:10:49.080 "zoned": false, 00:10:49.080 "supported_io_types": { 00:10:49.080 "read": true, 00:10:49.080 "write": true, 00:10:49.080 "unmap": true, 00:10:49.080 "flush": true, 00:10:49.080 "reset": true, 00:10:49.080 "nvme_admin": false, 00:10:49.080 "nvme_io": false, 00:10:49.080 "nvme_io_md": false, 00:10:49.080 "write_zeroes": true, 00:10:49.080 "zcopy": true, 00:10:49.080 "get_zone_info": false, 00:10:49.080 "zone_management": false, 00:10:49.080 "zone_append": false, 00:10:49.080 "compare": false, 00:10:49.080 "compare_and_write": false, 00:10:49.080 "abort": true, 00:10:49.080 "seek_hole": false, 00:10:49.080 "seek_data": false, 00:10:49.080 "copy": true, 00:10:49.080 "nvme_iov_md": false 00:10:49.080 }, 00:10:49.080 "memory_domains": [ 00:10:49.080 { 00:10:49.080 "dma_device_id": "system", 00:10:49.080 "dma_device_type": 1 00:10:49.080 }, 00:10:49.080 { 00:10:49.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.080 "dma_device_type": 2 00:10:49.080 } 00:10:49.080 ], 00:10:49.080 "driver_specific": {} 00:10:49.080 } 00:10:49.080 ] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.080 03:59:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.080 BaseBdev3 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.080 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.081 [ 00:10:49.081 { 
00:10:49.081 "name": "BaseBdev3", 00:10:49.081 "aliases": [ 00:10:49.081 "047328ee-0061-4cdd-96e8-041de75fa575" 00:10:49.081 ], 00:10:49.081 "product_name": "Malloc disk", 00:10:49.081 "block_size": 512, 00:10:49.081 "num_blocks": 65536, 00:10:49.081 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:49.081 "assigned_rate_limits": { 00:10:49.081 "rw_ios_per_sec": 0, 00:10:49.081 "rw_mbytes_per_sec": 0, 00:10:49.081 "r_mbytes_per_sec": 0, 00:10:49.081 "w_mbytes_per_sec": 0 00:10:49.081 }, 00:10:49.081 "claimed": false, 00:10:49.081 "zoned": false, 00:10:49.081 "supported_io_types": { 00:10:49.081 "read": true, 00:10:49.081 "write": true, 00:10:49.081 "unmap": true, 00:10:49.081 "flush": true, 00:10:49.081 "reset": true, 00:10:49.081 "nvme_admin": false, 00:10:49.081 "nvme_io": false, 00:10:49.081 "nvme_io_md": false, 00:10:49.081 "write_zeroes": true, 00:10:49.081 "zcopy": true, 00:10:49.081 "get_zone_info": false, 00:10:49.081 "zone_management": false, 00:10:49.081 "zone_append": false, 00:10:49.081 "compare": false, 00:10:49.081 "compare_and_write": false, 00:10:49.081 "abort": true, 00:10:49.081 "seek_hole": false, 00:10:49.081 "seek_data": false, 00:10:49.081 "copy": true, 00:10:49.081 "nvme_iov_md": false 00:10:49.081 }, 00:10:49.081 "memory_domains": [ 00:10:49.081 { 00:10:49.081 "dma_device_id": "system", 00:10:49.081 "dma_device_type": 1 00:10:49.081 }, 00:10:49.081 { 00:10:49.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.081 "dma_device_type": 2 00:10:49.081 } 00:10:49.081 ], 00:10:49.081 "driver_specific": {} 00:10:49.081 } 00:10:49.081 ] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.081 BaseBdev4 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:49.081 [ 00:10:49.081 { 00:10:49.081 "name": "BaseBdev4", 00:10:49.081 "aliases": [ 00:10:49.081 "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb" 00:10:49.081 ], 00:10:49.081 "product_name": "Malloc disk", 00:10:49.081 "block_size": 512, 00:10:49.081 "num_blocks": 65536, 00:10:49.081 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:49.081 "assigned_rate_limits": { 00:10:49.081 "rw_ios_per_sec": 0, 00:10:49.081 "rw_mbytes_per_sec": 0, 00:10:49.081 "r_mbytes_per_sec": 0, 00:10:49.081 "w_mbytes_per_sec": 0 00:10:49.081 }, 00:10:49.081 "claimed": false, 00:10:49.081 "zoned": false, 00:10:49.081 "supported_io_types": { 00:10:49.081 "read": true, 00:10:49.081 "write": true, 00:10:49.081 "unmap": true, 00:10:49.081 "flush": true, 00:10:49.081 "reset": true, 00:10:49.081 "nvme_admin": false, 00:10:49.081 "nvme_io": false, 00:10:49.081 "nvme_io_md": false, 00:10:49.081 "write_zeroes": true, 00:10:49.081 "zcopy": true, 00:10:49.081 "get_zone_info": false, 00:10:49.081 "zone_management": false, 00:10:49.081 "zone_append": false, 00:10:49.081 "compare": false, 00:10:49.081 "compare_and_write": false, 00:10:49.081 "abort": true, 00:10:49.081 "seek_hole": false, 00:10:49.081 "seek_data": false, 00:10:49.081 "copy": true, 00:10:49.081 "nvme_iov_md": false 00:10:49.081 }, 00:10:49.081 "memory_domains": [ 00:10:49.081 { 00:10:49.081 "dma_device_id": "system", 00:10:49.081 "dma_device_type": 1 00:10:49.081 }, 00:10:49.081 { 00:10:49.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.081 "dma_device_type": 2 00:10:49.081 } 00:10:49.081 ], 00:10:49.081 "driver_specific": {} 00:10:49.081 } 00:10:49.081 ] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.081 03:59:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.081 [2024-11-18 03:59:45.699227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.081 [2024-11-18 03:59:45.699360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.081 [2024-11-18 03:59:45.699405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.081 [2024-11-18 03:59:45.701562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.081 [2024-11-18 03:59:45.701661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.081 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.341 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.341 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.341 "name": "Existed_Raid", 00:10:49.341 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:49.341 "strip_size_kb": 64, 00:10:49.341 "state": "configuring", 00:10:49.341 "raid_level": "raid0", 00:10:49.341 "superblock": true, 00:10:49.341 "num_base_bdevs": 4, 00:10:49.341 "num_base_bdevs_discovered": 3, 00:10:49.341 "num_base_bdevs_operational": 4, 00:10:49.341 "base_bdevs_list": [ 00:10:49.341 { 00:10:49.341 "name": "BaseBdev1", 00:10:49.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.341 "is_configured": false, 00:10:49.341 "data_offset": 0, 00:10:49.341 "data_size": 0 00:10:49.341 }, 00:10:49.341 { 00:10:49.341 "name": "BaseBdev2", 00:10:49.341 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:49.341 "is_configured": true, 00:10:49.341 "data_offset": 2048, 00:10:49.341 "data_size": 63488 
00:10:49.341 }, 00:10:49.341 { 00:10:49.341 "name": "BaseBdev3", 00:10:49.341 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:49.341 "is_configured": true, 00:10:49.341 "data_offset": 2048, 00:10:49.341 "data_size": 63488 00:10:49.341 }, 00:10:49.341 { 00:10:49.342 "name": "BaseBdev4", 00:10:49.342 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:49.342 "is_configured": true, 00:10:49.342 "data_offset": 2048, 00:10:49.342 "data_size": 63488 00:10:49.342 } 00:10:49.342 ] 00:10:49.342 }' 00:10:49.342 03:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.342 03:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.601 [2024-11-18 03:59:46.142548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.601 "name": "Existed_Raid", 00:10:49.601 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:49.601 "strip_size_kb": 64, 00:10:49.601 "state": "configuring", 00:10:49.601 "raid_level": "raid0", 00:10:49.601 "superblock": true, 00:10:49.601 "num_base_bdevs": 4, 00:10:49.601 "num_base_bdevs_discovered": 2, 00:10:49.601 "num_base_bdevs_operational": 4, 00:10:49.601 "base_bdevs_list": [ 00:10:49.601 { 00:10:49.601 "name": "BaseBdev1", 00:10:49.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.601 "is_configured": false, 00:10:49.601 "data_offset": 0, 00:10:49.601 "data_size": 0 00:10:49.601 }, 00:10:49.601 { 00:10:49.601 "name": null, 00:10:49.601 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:49.601 "is_configured": false, 00:10:49.601 "data_offset": 0, 00:10:49.601 "data_size": 63488 
00:10:49.601 }, 00:10:49.601 { 00:10:49.601 "name": "BaseBdev3", 00:10:49.601 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:49.601 "is_configured": true, 00:10:49.601 "data_offset": 2048, 00:10:49.601 "data_size": 63488 00:10:49.601 }, 00:10:49.601 { 00:10:49.601 "name": "BaseBdev4", 00:10:49.601 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:49.601 "is_configured": true, 00:10:49.601 "data_offset": 2048, 00:10:49.601 "data_size": 63488 00:10:49.601 } 00:10:49.601 ] 00:10:49.601 }' 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.601 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.171 [2024-11-18 03:59:46.616145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.171 BaseBdev1 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.171 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.172 [ 00:10:50.172 { 00:10:50.172 "name": "BaseBdev1", 00:10:50.172 "aliases": [ 00:10:50.172 "97b8a223-11b9-4ae4-a052-5c812e6c3c52" 00:10:50.172 ], 00:10:50.172 "product_name": "Malloc disk", 00:10:50.172 "block_size": 512, 00:10:50.172 "num_blocks": 65536, 00:10:50.172 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:50.172 "assigned_rate_limits": { 00:10:50.172 "rw_ios_per_sec": 0, 00:10:50.172 "rw_mbytes_per_sec": 0, 
00:10:50.172 "r_mbytes_per_sec": 0, 00:10:50.172 "w_mbytes_per_sec": 0 00:10:50.172 }, 00:10:50.172 "claimed": true, 00:10:50.172 "claim_type": "exclusive_write", 00:10:50.172 "zoned": false, 00:10:50.172 "supported_io_types": { 00:10:50.172 "read": true, 00:10:50.172 "write": true, 00:10:50.172 "unmap": true, 00:10:50.172 "flush": true, 00:10:50.172 "reset": true, 00:10:50.172 "nvme_admin": false, 00:10:50.172 "nvme_io": false, 00:10:50.172 "nvme_io_md": false, 00:10:50.172 "write_zeroes": true, 00:10:50.172 "zcopy": true, 00:10:50.172 "get_zone_info": false, 00:10:50.172 "zone_management": false, 00:10:50.172 "zone_append": false, 00:10:50.172 "compare": false, 00:10:50.172 "compare_and_write": false, 00:10:50.172 "abort": true, 00:10:50.172 "seek_hole": false, 00:10:50.172 "seek_data": false, 00:10:50.172 "copy": true, 00:10:50.172 "nvme_iov_md": false 00:10:50.172 }, 00:10:50.172 "memory_domains": [ 00:10:50.172 { 00:10:50.172 "dma_device_id": "system", 00:10:50.172 "dma_device_type": 1 00:10:50.172 }, 00:10:50.172 { 00:10:50.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.172 "dma_device_type": 2 00:10:50.172 } 00:10:50.172 ], 00:10:50.172 "driver_specific": {} 00:10:50.172 } 00:10:50.172 ] 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.172 03:59:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.172 "name": "Existed_Raid", 00:10:50.172 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:50.172 "strip_size_kb": 64, 00:10:50.172 "state": "configuring", 00:10:50.172 "raid_level": "raid0", 00:10:50.172 "superblock": true, 00:10:50.172 "num_base_bdevs": 4, 00:10:50.172 "num_base_bdevs_discovered": 3, 00:10:50.172 "num_base_bdevs_operational": 4, 00:10:50.172 "base_bdevs_list": [ 00:10:50.172 { 00:10:50.172 "name": "BaseBdev1", 00:10:50.172 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:50.172 "is_configured": true, 00:10:50.172 "data_offset": 2048, 00:10:50.172 "data_size": 63488 00:10:50.172 }, 00:10:50.172 { 
00:10:50.172 "name": null, 00:10:50.172 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:50.172 "is_configured": false, 00:10:50.172 "data_offset": 0, 00:10:50.172 "data_size": 63488 00:10:50.172 }, 00:10:50.172 { 00:10:50.172 "name": "BaseBdev3", 00:10:50.172 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:50.172 "is_configured": true, 00:10:50.172 "data_offset": 2048, 00:10:50.172 "data_size": 63488 00:10:50.172 }, 00:10:50.172 { 00:10:50.172 "name": "BaseBdev4", 00:10:50.172 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:50.172 "is_configured": true, 00:10:50.172 "data_offset": 2048, 00:10:50.172 "data_size": 63488 00:10:50.172 } 00:10:50.172 ] 00:10:50.172 }' 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.172 03:59:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 [2024-11-18 03:59:47.135354] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.743 03:59:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.743 "name": "Existed_Raid", 00:10:50.743 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:50.743 "strip_size_kb": 64, 00:10:50.743 "state": "configuring", 00:10:50.743 "raid_level": "raid0", 00:10:50.743 "superblock": true, 00:10:50.743 "num_base_bdevs": 4, 00:10:50.743 "num_base_bdevs_discovered": 2, 00:10:50.743 "num_base_bdevs_operational": 4, 00:10:50.743 "base_bdevs_list": [ 00:10:50.743 { 00:10:50.743 "name": "BaseBdev1", 00:10:50.743 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:50.743 "is_configured": true, 00:10:50.743 "data_offset": 2048, 00:10:50.743 "data_size": 63488 00:10:50.743 }, 00:10:50.743 { 00:10:50.743 "name": null, 00:10:50.743 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:50.743 "is_configured": false, 00:10:50.743 "data_offset": 0, 00:10:50.743 "data_size": 63488 00:10:50.743 }, 00:10:50.743 { 00:10:50.743 "name": null, 00:10:50.743 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:50.743 "is_configured": false, 00:10:50.743 "data_offset": 0, 00:10:50.743 "data_size": 63488 00:10:50.743 }, 00:10:50.743 { 00:10:50.743 "name": "BaseBdev4", 00:10:50.743 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:50.743 "is_configured": true, 00:10:50.743 "data_offset": 2048, 00:10:50.743 "data_size": 63488 00:10:50.743 } 00:10:50.743 ] 00:10:50.743 }' 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.743 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.003 
03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.003 [2024-11-18 03:59:47.626544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.003 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.263 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.263 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.263 "name": "Existed_Raid", 00:10:51.263 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:51.263 "strip_size_kb": 64, 00:10:51.263 "state": "configuring", 00:10:51.263 "raid_level": "raid0", 00:10:51.263 "superblock": true, 00:10:51.263 "num_base_bdevs": 4, 00:10:51.263 "num_base_bdevs_discovered": 3, 00:10:51.263 "num_base_bdevs_operational": 4, 00:10:51.263 "base_bdevs_list": [ 00:10:51.263 { 00:10:51.263 "name": "BaseBdev1", 00:10:51.263 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:51.263 "is_configured": true, 00:10:51.263 "data_offset": 2048, 00:10:51.263 "data_size": 63488 00:10:51.263 }, 00:10:51.263 { 00:10:51.263 "name": null, 00:10:51.263 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:51.263 "is_configured": false, 00:10:51.263 "data_offset": 0, 00:10:51.263 "data_size": 63488 00:10:51.263 }, 00:10:51.263 { 00:10:51.263 "name": "BaseBdev3", 00:10:51.263 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:51.263 "is_configured": true, 00:10:51.263 "data_offset": 2048, 00:10:51.263 "data_size": 63488 00:10:51.263 }, 00:10:51.263 { 00:10:51.263 "name": "BaseBdev4", 00:10:51.263 "uuid": 
"2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:51.263 "is_configured": true, 00:10:51.263 "data_offset": 2048, 00:10:51.263 "data_size": 63488 00:10:51.263 } 00:10:51.263 ] 00:10:51.263 }' 00:10:51.263 03:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.263 03:59:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.522 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.523 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.523 [2024-11-18 03:59:48.101762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.783 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.783 "name": "Existed_Raid", 00:10:51.783 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:51.783 "strip_size_kb": 64, 00:10:51.783 "state": "configuring", 00:10:51.783 "raid_level": "raid0", 00:10:51.783 "superblock": true, 00:10:51.783 "num_base_bdevs": 4, 00:10:51.783 "num_base_bdevs_discovered": 2, 00:10:51.783 "num_base_bdevs_operational": 4, 00:10:51.783 "base_bdevs_list": [ 00:10:51.783 { 00:10:51.783 "name": null, 00:10:51.783 
"uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:51.783 "is_configured": false, 00:10:51.783 "data_offset": 0, 00:10:51.783 "data_size": 63488 00:10:51.783 }, 00:10:51.783 { 00:10:51.783 "name": null, 00:10:51.783 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:51.784 "is_configured": false, 00:10:51.784 "data_offset": 0, 00:10:51.784 "data_size": 63488 00:10:51.784 }, 00:10:51.784 { 00:10:51.784 "name": "BaseBdev3", 00:10:51.784 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:51.784 "is_configured": true, 00:10:51.784 "data_offset": 2048, 00:10:51.784 "data_size": 63488 00:10:51.784 }, 00:10:51.784 { 00:10:51.784 "name": "BaseBdev4", 00:10:51.784 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:51.784 "is_configured": true, 00:10:51.784 "data_offset": 2048, 00:10:51.784 "data_size": 63488 00:10:51.784 } 00:10:51.784 ] 00:10:51.784 }' 00:10:51.784 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.784 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.044 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.044 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.044 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.044 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.044 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.304 [2024-11-18 03:59:48.710209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.304 03:59:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.304 "name": "Existed_Raid", 00:10:52.304 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:52.304 "strip_size_kb": 64, 00:10:52.304 "state": "configuring", 00:10:52.304 "raid_level": "raid0", 00:10:52.304 "superblock": true, 00:10:52.304 "num_base_bdevs": 4, 00:10:52.304 "num_base_bdevs_discovered": 3, 00:10:52.304 "num_base_bdevs_operational": 4, 00:10:52.304 "base_bdevs_list": [ 00:10:52.304 { 00:10:52.304 "name": null, 00:10:52.304 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:52.304 "is_configured": false, 00:10:52.304 "data_offset": 0, 00:10:52.304 "data_size": 63488 00:10:52.304 }, 00:10:52.304 { 00:10:52.304 "name": "BaseBdev2", 00:10:52.304 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:52.304 "is_configured": true, 00:10:52.304 "data_offset": 2048, 00:10:52.304 "data_size": 63488 00:10:52.304 }, 00:10:52.304 { 00:10:52.304 "name": "BaseBdev3", 00:10:52.304 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:52.304 "is_configured": true, 00:10:52.304 "data_offset": 2048, 00:10:52.304 "data_size": 63488 00:10:52.304 }, 00:10:52.304 { 00:10:52.304 "name": "BaseBdev4", 00:10:52.304 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:52.304 "is_configured": true, 00:10:52.304 "data_offset": 2048, 00:10:52.304 "data_size": 63488 00:10:52.304 } 00:10:52.304 ] 00:10:52.304 }' 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.304 03:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.564 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.564 03:59:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.564 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.564 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.564 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 97b8a223-11b9-4ae4-a052-5c812e6c3c52 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.823 [2024-11-18 03:59:49.299184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:52.823 [2024-11-18 03:59:49.299533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.823 [2024-11-18 03:59:49.299582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.823 [2024-11-18 03:59:49.299892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:52.823 [2024-11-18 03:59:49.300089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.823 NewBaseBdev 00:10:52.823 [2024-11-18 03:59:49.300139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:52.823 [2024-11-18 03:59:49.300325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.823 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.824 03:59:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.824 [ 00:10:52.824 { 00:10:52.824 "name": "NewBaseBdev", 00:10:52.824 "aliases": [ 00:10:52.824 "97b8a223-11b9-4ae4-a052-5c812e6c3c52" 00:10:52.824 ], 00:10:52.824 "product_name": "Malloc disk", 00:10:52.824 "block_size": 512, 00:10:52.824 "num_blocks": 65536, 00:10:52.824 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:52.824 "assigned_rate_limits": { 00:10:52.824 "rw_ios_per_sec": 0, 00:10:52.824 "rw_mbytes_per_sec": 0, 00:10:52.824 "r_mbytes_per_sec": 0, 00:10:52.824 "w_mbytes_per_sec": 0 00:10:52.824 }, 00:10:52.824 "claimed": true, 00:10:52.824 "claim_type": "exclusive_write", 00:10:52.824 "zoned": false, 00:10:52.824 "supported_io_types": { 00:10:52.824 "read": true, 00:10:52.824 "write": true, 00:10:52.824 "unmap": true, 00:10:52.824 "flush": true, 00:10:52.824 "reset": true, 00:10:52.824 "nvme_admin": false, 00:10:52.824 "nvme_io": false, 00:10:52.824 "nvme_io_md": false, 00:10:52.824 "write_zeroes": true, 00:10:52.824 "zcopy": true, 00:10:52.824 "get_zone_info": false, 00:10:52.824 "zone_management": false, 00:10:52.824 "zone_append": false, 00:10:52.824 "compare": false, 00:10:52.824 "compare_and_write": false, 00:10:52.824 "abort": true, 00:10:52.824 "seek_hole": false, 00:10:52.824 "seek_data": false, 00:10:52.824 "copy": true, 00:10:52.824 "nvme_iov_md": false 00:10:52.824 }, 00:10:52.824 "memory_domains": [ 00:10:52.824 { 00:10:52.824 "dma_device_id": "system", 00:10:52.824 "dma_device_type": 1 00:10:52.824 }, 00:10:52.824 { 00:10:52.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.824 "dma_device_type": 2 00:10:52.824 } 00:10:52.824 ], 00:10:52.824 "driver_specific": {} 00:10:52.824 } 00:10:52.824 ] 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.824 03:59:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.824 "name": "Existed_Raid", 00:10:52.824 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:52.824 "strip_size_kb": 64, 00:10:52.824 
"state": "online", 00:10:52.824 "raid_level": "raid0", 00:10:52.824 "superblock": true, 00:10:52.824 "num_base_bdevs": 4, 00:10:52.824 "num_base_bdevs_discovered": 4, 00:10:52.824 "num_base_bdevs_operational": 4, 00:10:52.824 "base_bdevs_list": [ 00:10:52.824 { 00:10:52.824 "name": "NewBaseBdev", 00:10:52.824 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:52.824 "is_configured": true, 00:10:52.824 "data_offset": 2048, 00:10:52.824 "data_size": 63488 00:10:52.824 }, 00:10:52.824 { 00:10:52.824 "name": "BaseBdev2", 00:10:52.824 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:52.824 "is_configured": true, 00:10:52.824 "data_offset": 2048, 00:10:52.824 "data_size": 63488 00:10:52.824 }, 00:10:52.824 { 00:10:52.824 "name": "BaseBdev3", 00:10:52.824 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:52.824 "is_configured": true, 00:10:52.824 "data_offset": 2048, 00:10:52.824 "data_size": 63488 00:10:52.824 }, 00:10:52.824 { 00:10:52.824 "name": "BaseBdev4", 00:10:52.824 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:52.824 "is_configured": true, 00:10:52.824 "data_offset": 2048, 00:10:52.824 "data_size": 63488 00:10:52.824 } 00:10:52.824 ] 00:10:52.824 }' 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.824 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.394 
03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.394 [2024-11-18 03:59:49.798889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.394 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.394 "name": "Existed_Raid", 00:10:53.394 "aliases": [ 00:10:53.394 "370641fe-e5f8-44cd-9020-cb05849b48b2" 00:10:53.395 ], 00:10:53.395 "product_name": "Raid Volume", 00:10:53.395 "block_size": 512, 00:10:53.395 "num_blocks": 253952, 00:10:53.395 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:53.395 "assigned_rate_limits": { 00:10:53.395 "rw_ios_per_sec": 0, 00:10:53.395 "rw_mbytes_per_sec": 0, 00:10:53.395 "r_mbytes_per_sec": 0, 00:10:53.395 "w_mbytes_per_sec": 0 00:10:53.395 }, 00:10:53.395 "claimed": false, 00:10:53.395 "zoned": false, 00:10:53.395 "supported_io_types": { 00:10:53.395 "read": true, 00:10:53.395 "write": true, 00:10:53.395 "unmap": true, 00:10:53.395 "flush": true, 00:10:53.395 "reset": true, 00:10:53.395 "nvme_admin": false, 00:10:53.395 "nvme_io": false, 00:10:53.395 "nvme_io_md": false, 00:10:53.395 "write_zeroes": true, 00:10:53.395 "zcopy": false, 00:10:53.395 "get_zone_info": false, 00:10:53.395 "zone_management": false, 00:10:53.395 "zone_append": false, 00:10:53.395 "compare": false, 00:10:53.395 "compare_and_write": false, 00:10:53.395 "abort": 
false, 00:10:53.395 "seek_hole": false, 00:10:53.395 "seek_data": false, 00:10:53.395 "copy": false, 00:10:53.395 "nvme_iov_md": false 00:10:53.395 }, 00:10:53.395 "memory_domains": [ 00:10:53.395 { 00:10:53.395 "dma_device_id": "system", 00:10:53.395 "dma_device_type": 1 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.395 "dma_device_type": 2 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "system", 00:10:53.395 "dma_device_type": 1 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.395 "dma_device_type": 2 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "system", 00:10:53.395 "dma_device_type": 1 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.395 "dma_device_type": 2 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "system", 00:10:53.395 "dma_device_type": 1 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.395 "dma_device_type": 2 00:10:53.395 } 00:10:53.395 ], 00:10:53.395 "driver_specific": { 00:10:53.395 "raid": { 00:10:53.395 "uuid": "370641fe-e5f8-44cd-9020-cb05849b48b2", 00:10:53.395 "strip_size_kb": 64, 00:10:53.395 "state": "online", 00:10:53.395 "raid_level": "raid0", 00:10:53.395 "superblock": true, 00:10:53.395 "num_base_bdevs": 4, 00:10:53.395 "num_base_bdevs_discovered": 4, 00:10:53.395 "num_base_bdevs_operational": 4, 00:10:53.395 "base_bdevs_list": [ 00:10:53.395 { 00:10:53.395 "name": "NewBaseBdev", 00:10:53.395 "uuid": "97b8a223-11b9-4ae4-a052-5c812e6c3c52", 00:10:53.395 "is_configured": true, 00:10:53.395 "data_offset": 2048, 00:10:53.395 "data_size": 63488 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "name": "BaseBdev2", 00:10:53.395 "uuid": "26b981bc-4fd9-426d-a02d-ce4f6b4a0c4b", 00:10:53.395 "is_configured": true, 00:10:53.395 "data_offset": 2048, 00:10:53.395 "data_size": 63488 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 
"name": "BaseBdev3", 00:10:53.395 "uuid": "047328ee-0061-4cdd-96e8-041de75fa575", 00:10:53.395 "is_configured": true, 00:10:53.395 "data_offset": 2048, 00:10:53.395 "data_size": 63488 00:10:53.395 }, 00:10:53.395 { 00:10:53.395 "name": "BaseBdev4", 00:10:53.395 "uuid": "2ac1b7fa-60c1-4ad7-aede-a7b9caa10afb", 00:10:53.395 "is_configured": true, 00:10:53.395 "data_offset": 2048, 00:10:53.395 "data_size": 63488 00:10:53.395 } 00:10:53.395 ] 00:10:53.395 } 00:10:53.395 } 00:10:53.395 }' 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.395 BaseBdev2 00:10:53.395 BaseBdev3 00:10:53.395 BaseBdev4' 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.395 03:59:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 03:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.395 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.656 [2024-11-18 03:59:50.145923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.656 [2024-11-18 03:59:50.146048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.656 [2024-11-18 03:59:50.146172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.656 [2024-11-18 03:59:50.146276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.656 [2024-11-18 03:59:50.146323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70027 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70027 ']' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70027 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70027 00:10:53.656 killing process with pid 70027 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70027' 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70027 00:10:53.656 [2024-11-18 03:59:50.197201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.656 03:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70027 00:10:54.226 [2024-11-18 03:59:50.615020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.164 03:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.164 00:10:55.164 real 0m11.666s 00:10:55.164 user 0m18.309s 00:10:55.164 sys 0m2.218s 00:10:55.164 ************************************ 00:10:55.164 END TEST raid_state_function_test_sb 00:10:55.164 
************************************ 00:10:55.164 03:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.164 03:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.425 03:59:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:55.425 03:59:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.425 03:59:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.425 03:59:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.425 ************************************ 00:10:55.425 START TEST raid_superblock_test 00:10:55.425 ************************************ 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70712 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70712 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.425 03:59:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.425 [2024-11-18 03:59:51.958466] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:55.425 [2024-11-18 03:59:51.958585] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70712 ] 00:10:55.684 [2024-11-18 03:59:52.132524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.684 [2024-11-18 03:59:52.265641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.944 [2024-11-18 03:59:52.497594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.944 [2024-11-18 03:59:52.497663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:56.215 
03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.215 malloc1 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.215 [2024-11-18 03:59:52.846556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.215 [2024-11-18 03:59:52.846633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.215 [2024-11-18 03:59:52.846658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:56.215 [2024-11-18 03:59:52.846667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.215 [2024-11-18 03:59:52.849017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.215 [2024-11-18 03:59:52.849051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.215 pt1 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.215 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.476 malloc2 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 [2024-11-18 03:59:52.906704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.477 [2024-11-18 03:59:52.906759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.477 [2024-11-18 03:59:52.906782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:56.477 [2024-11-18 03:59:52.906791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.477 [2024-11-18 03:59:52.909138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.477 [2024-11-18 03:59:52.909169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.477 
pt2 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 malloc3 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 [2024-11-18 03:59:52.974642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.477 [2024-11-18 03:59:52.974695] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.477 [2024-11-18 03:59:52.974715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:56.477 [2024-11-18 03:59:52.974724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.477 [2024-11-18 03:59:52.977192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.477 [2024-11-18 03:59:52.977228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.477 pt3 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 malloc4 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 [2024-11-18 03:59:53.034998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:56.477 [2024-11-18 03:59:53.035050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.477 [2024-11-18 03:59:53.035068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:56.477 [2024-11-18 03:59:53.035077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.477 [2024-11-18 03:59:53.037371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.477 [2024-11-18 03:59:53.037402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:56.477 pt4 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 [2024-11-18 03:59:53.047025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.477 [2024-11-18 
03:59:53.049083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.477 [2024-11-18 03:59:53.049154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.477 [2024-11-18 03:59:53.049211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:56.477 [2024-11-18 03:59:53.049380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:56.477 [2024-11-18 03:59:53.049395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.477 [2024-11-18 03:59:53.049632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:56.477 [2024-11-18 03:59:53.049804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:56.477 [2024-11-18 03:59:53.049834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:56.477 [2024-11-18 03:59:53.049977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.477 "name": "raid_bdev1", 00:10:56.477 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:56.477 "strip_size_kb": 64, 00:10:56.477 "state": "online", 00:10:56.477 "raid_level": "raid0", 00:10:56.477 "superblock": true, 00:10:56.477 "num_base_bdevs": 4, 00:10:56.477 "num_base_bdevs_discovered": 4, 00:10:56.477 "num_base_bdevs_operational": 4, 00:10:56.477 "base_bdevs_list": [ 00:10:56.477 { 00:10:56.477 "name": "pt1", 00:10:56.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.477 "is_configured": true, 00:10:56.477 "data_offset": 2048, 00:10:56.477 "data_size": 63488 00:10:56.477 }, 00:10:56.477 { 00:10:56.477 "name": "pt2", 00:10:56.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.477 "is_configured": true, 00:10:56.477 "data_offset": 2048, 00:10:56.477 "data_size": 63488 00:10:56.477 }, 00:10:56.477 { 00:10:56.477 "name": "pt3", 00:10:56.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.477 "is_configured": true, 00:10:56.477 "data_offset": 2048, 00:10:56.477 
"data_size": 63488 00:10:56.477 }, 00:10:56.477 { 00:10:56.477 "name": "pt4", 00:10:56.477 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.477 "is_configured": true, 00:10:56.477 "data_offset": 2048, 00:10:56.477 "data_size": 63488 00:10:56.477 } 00:10:56.477 ] 00:10:56.477 }' 00:10:56.477 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.478 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 [2024-11-18 03:59:53.510582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.047 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.047 "name": "raid_bdev1", 00:10:57.047 "aliases": [ 00:10:57.047 "ec414845-07d9-490a-9524-5130a981044e" 
00:10:57.047 ], 00:10:57.047 "product_name": "Raid Volume", 00:10:57.047 "block_size": 512, 00:10:57.047 "num_blocks": 253952, 00:10:57.047 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:57.047 "assigned_rate_limits": { 00:10:57.047 "rw_ios_per_sec": 0, 00:10:57.047 "rw_mbytes_per_sec": 0, 00:10:57.047 "r_mbytes_per_sec": 0, 00:10:57.047 "w_mbytes_per_sec": 0 00:10:57.047 }, 00:10:57.047 "claimed": false, 00:10:57.047 "zoned": false, 00:10:57.047 "supported_io_types": { 00:10:57.047 "read": true, 00:10:57.047 "write": true, 00:10:57.047 "unmap": true, 00:10:57.047 "flush": true, 00:10:57.047 "reset": true, 00:10:57.047 "nvme_admin": false, 00:10:57.047 "nvme_io": false, 00:10:57.047 "nvme_io_md": false, 00:10:57.047 "write_zeroes": true, 00:10:57.047 "zcopy": false, 00:10:57.047 "get_zone_info": false, 00:10:57.047 "zone_management": false, 00:10:57.047 "zone_append": false, 00:10:57.047 "compare": false, 00:10:57.047 "compare_and_write": false, 00:10:57.047 "abort": false, 00:10:57.047 "seek_hole": false, 00:10:57.047 "seek_data": false, 00:10:57.047 "copy": false, 00:10:57.047 "nvme_iov_md": false 00:10:57.047 }, 00:10:57.047 "memory_domains": [ 00:10:57.047 { 00:10:57.047 "dma_device_id": "system", 00:10:57.048 "dma_device_type": 1 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.048 "dma_device_type": 2 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "system", 00:10:57.048 "dma_device_type": 1 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.048 "dma_device_type": 2 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "system", 00:10:57.048 "dma_device_type": 1 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.048 "dma_device_type": 2 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": "system", 00:10:57.048 "dma_device_type": 1 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:57.048 "dma_device_type": 2 00:10:57.048 } 00:10:57.048 ], 00:10:57.048 "driver_specific": { 00:10:57.048 "raid": { 00:10:57.048 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:57.048 "strip_size_kb": 64, 00:10:57.048 "state": "online", 00:10:57.048 "raid_level": "raid0", 00:10:57.048 "superblock": true, 00:10:57.048 "num_base_bdevs": 4, 00:10:57.048 "num_base_bdevs_discovered": 4, 00:10:57.048 "num_base_bdevs_operational": 4, 00:10:57.048 "base_bdevs_list": [ 00:10:57.048 { 00:10:57.048 "name": "pt1", 00:10:57.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.048 "is_configured": true, 00:10:57.048 "data_offset": 2048, 00:10:57.048 "data_size": 63488 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "name": "pt2", 00:10:57.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.048 "is_configured": true, 00:10:57.048 "data_offset": 2048, 00:10:57.048 "data_size": 63488 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "name": "pt3", 00:10:57.048 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.048 "is_configured": true, 00:10:57.048 "data_offset": 2048, 00:10:57.048 "data_size": 63488 00:10:57.048 }, 00:10:57.048 { 00:10:57.048 "name": "pt4", 00:10:57.048 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.048 "is_configured": true, 00:10:57.048 "data_offset": 2048, 00:10:57.048 "data_size": 63488 00:10:57.048 } 00:10:57.048 ] 00:10:57.048 } 00:10:57.048 } 00:10:57.048 }' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.048 pt2 00:10:57.048 pt3 00:10:57.048 pt4' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.048 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.308 03:59:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.308 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 [2024-11-18 03:59:53.813977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ec414845-07d9-490a-9524-5130a981044e 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ec414845-07d9-490a-9524-5130a981044e ']' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 [2024-11-18 03:59:53.845622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.309 [2024-11-18 03:59:53.845648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.309 [2024-11-18 03:59:53.845732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.309 [2024-11-18 03:59:53.845805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.309 [2024-11-18 03:59:53.845821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.309 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.569 03:59:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.569 03:59:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.569 [2024-11-18 03:59:53.997374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.569 [2024-11-18 03:59:53.999396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.569 [2024-11-18 03:59:53.999444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:57.569 [2024-11-18 03:59:53.999475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:57.569 [2024-11-18 03:59:53.999563] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:57.569 [2024-11-18 03:59:53.999612] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:57.569 [2024-11-18 03:59:53.999630] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:57.569 [2024-11-18 03:59:53.999649] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:57.569 [2024-11-18 03:59:53.999663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.569 [2024-11-18 03:59:53.999677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:57.569 request: 00:10:57.569 { 00:10:57.569 "name": "raid_bdev1", 00:10:57.569 "raid_level": "raid0", 00:10:57.569 "base_bdevs": [ 00:10:57.569 "malloc1", 00:10:57.569 "malloc2", 00:10:57.569 "malloc3", 00:10:57.569 "malloc4" 00:10:57.569 ], 00:10:57.569 "strip_size_kb": 64, 00:10:57.569 "superblock": false, 00:10:57.569 "method": "bdev_raid_create", 00:10:57.569 "req_id": 1 00:10:57.569 } 00:10:57.569 Got JSON-RPC error response 00:10:57.569 response: 00:10:57.569 { 00:10:57.569 "code": -17, 00:10:57.569 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.569 } 00:10:57.569 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.569 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:57.569 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.569 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.570 [2024-11-18 03:59:54.065243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.570 [2024-11-18 03:59:54.065288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.570 [2024-11-18 03:59:54.065305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:57.570 [2024-11-18 03:59:54.065315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.570 [2024-11-18 03:59:54.067745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.570 [2024-11-18 03:59:54.067784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.570 [2024-11-18 03:59:54.067870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.570 [2024-11-18 03:59:54.067936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.570 pt1 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.570 "name": "raid_bdev1", 00:10:57.570 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:57.570 "strip_size_kb": 64, 00:10:57.570 "state": "configuring", 00:10:57.570 "raid_level": "raid0", 00:10:57.570 "superblock": true, 00:10:57.570 "num_base_bdevs": 4, 00:10:57.570 "num_base_bdevs_discovered": 1, 00:10:57.570 "num_base_bdevs_operational": 4, 00:10:57.570 "base_bdevs_list": [ 00:10:57.570 { 00:10:57.570 "name": "pt1", 00:10:57.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.570 "is_configured": true, 00:10:57.570 "data_offset": 2048, 00:10:57.570 "data_size": 63488 00:10:57.570 }, 00:10:57.570 { 00:10:57.570 "name": null, 00:10:57.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.570 "is_configured": false, 00:10:57.570 "data_offset": 2048, 00:10:57.570 "data_size": 63488 00:10:57.570 }, 00:10:57.570 { 00:10:57.570 "name": null, 00:10:57.570 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.570 "is_configured": false, 00:10:57.570 "data_offset": 2048, 00:10:57.570 "data_size": 63488 00:10:57.570 }, 00:10:57.570 { 00:10:57.570 "name": null, 00:10:57.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.570 "is_configured": false, 00:10:57.570 "data_offset": 2048, 00:10:57.570 "data_size": 63488 00:10:57.570 } 00:10:57.570 ] 00:10:57.570 }' 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.570 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 [2024-11-18 03:59:54.556463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.141 [2024-11-18 03:59:54.556566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.141 [2024-11-18 03:59:54.556589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:58.141 [2024-11-18 03:59:54.556602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.141 [2024-11-18 03:59:54.557168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.141 [2024-11-18 03:59:54.557196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.141 [2024-11-18 03:59:54.557288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.141 [2024-11-18 03:59:54.557318] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.141 pt2 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 [2024-11-18 03:59:54.568414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.141 03:59:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.141 "name": "raid_bdev1", 00:10:58.141 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:58.141 "strip_size_kb": 64, 00:10:58.141 "state": "configuring", 00:10:58.141 "raid_level": "raid0", 00:10:58.141 "superblock": true, 00:10:58.141 "num_base_bdevs": 4, 00:10:58.141 "num_base_bdevs_discovered": 1, 00:10:58.141 "num_base_bdevs_operational": 4, 00:10:58.141 "base_bdevs_list": [ 00:10:58.141 { 00:10:58.141 "name": "pt1", 00:10:58.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.141 "is_configured": true, 00:10:58.141 "data_offset": 2048, 00:10:58.141 "data_size": 63488 00:10:58.141 }, 00:10:58.141 { 00:10:58.141 "name": null, 00:10:58.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.141 "is_configured": false, 00:10:58.141 "data_offset": 0, 00:10:58.141 "data_size": 63488 00:10:58.141 }, 00:10:58.141 { 00:10:58.141 "name": null, 00:10:58.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.141 "is_configured": false, 00:10:58.141 "data_offset": 2048, 00:10:58.141 "data_size": 63488 00:10:58.141 }, 00:10:58.141 { 00:10:58.141 "name": null, 00:10:58.141 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.141 "is_configured": false, 00:10:58.141 "data_offset": 2048, 00:10:58.141 "data_size": 63488 00:10:58.141 } 00:10:58.141 ] 00:10:58.141 }' 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.141 03:59:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.401 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:58.401 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.401 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.401 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.401 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.401 [2024-11-18 03:59:54.983705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.402 [2024-11-18 03:59:54.983776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.402 [2024-11-18 03:59:54.983799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:58.402 [2024-11-18 03:59:54.983809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.402 [2024-11-18 03:59:54.984324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.402 [2024-11-18 03:59:54.984354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.402 [2024-11-18 03:59:54.984446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.402 [2024-11-18 03:59:54.984473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.402 pt2 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 [2024-11-18 03:59:54.995645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.402 [2024-11-18 03:59:54.995695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.402 [2024-11-18 03:59:54.995719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:58.402 [2024-11-18 03:59:54.995729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.402 [2024-11-18 03:59:54.996124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.402 [2024-11-18 03:59:54.996149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.402 [2024-11-18 03:59:54.996214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:58.402 [2024-11-18 03:59:54.996232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.402 pt3 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.402 03:59:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.402 [2024-11-18 03:59:55.007618] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:58.402 [2024-11-18 03:59:55.007663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.402 [2024-11-18 03:59:55.007680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:58.402 [2024-11-18 03:59:55.007688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.402 [2024-11-18 03:59:55.008055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.402 [2024-11-18 03:59:55.008077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.402 [2024-11-18 03:59:55.008135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:58.402 [2024-11-18 03:59:55.008152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.402 [2024-11-18 03:59:55.008283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.402 [2024-11-18 03:59:55.008298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.402 [2024-11-18 03:59:55.008540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:58.402 [2024-11-18 03:59:55.008692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.402 [2024-11-18 03:59:55.008712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.402 [2024-11-18 03:59:55.008852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.402 pt4 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.402 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.662 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.662 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.663 "name": "raid_bdev1", 00:10:58.663 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:58.663 "strip_size_kb": 64, 00:10:58.663 "state": "online", 00:10:58.663 "raid_level": "raid0", 00:10:58.663 
"superblock": true, 00:10:58.663 "num_base_bdevs": 4, 00:10:58.663 "num_base_bdevs_discovered": 4, 00:10:58.663 "num_base_bdevs_operational": 4, 00:10:58.663 "base_bdevs_list": [ 00:10:58.663 { 00:10:58.663 "name": "pt1", 00:10:58.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.663 "is_configured": true, 00:10:58.663 "data_offset": 2048, 00:10:58.663 "data_size": 63488 00:10:58.663 }, 00:10:58.663 { 00:10:58.663 "name": "pt2", 00:10:58.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.663 "is_configured": true, 00:10:58.663 "data_offset": 2048, 00:10:58.663 "data_size": 63488 00:10:58.663 }, 00:10:58.663 { 00:10:58.663 "name": "pt3", 00:10:58.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.663 "is_configured": true, 00:10:58.663 "data_offset": 2048, 00:10:58.663 "data_size": 63488 00:10:58.663 }, 00:10:58.663 { 00:10:58.663 "name": "pt4", 00:10:58.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.663 "is_configured": true, 00:10:58.663 "data_offset": 2048, 00:10:58.663 "data_size": 63488 00:10:58.663 } 00:10:58.663 ] 00:10:58.663 }' 00:10:58.663 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.663 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.923 03:59:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.923 [2024-11-18 03:59:55.407388] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.923 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.924 "name": "raid_bdev1", 00:10:58.924 "aliases": [ 00:10:58.924 "ec414845-07d9-490a-9524-5130a981044e" 00:10:58.924 ], 00:10:58.924 "product_name": "Raid Volume", 00:10:58.924 "block_size": 512, 00:10:58.924 "num_blocks": 253952, 00:10:58.924 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:58.924 "assigned_rate_limits": { 00:10:58.924 "rw_ios_per_sec": 0, 00:10:58.924 "rw_mbytes_per_sec": 0, 00:10:58.924 "r_mbytes_per_sec": 0, 00:10:58.924 "w_mbytes_per_sec": 0 00:10:58.924 }, 00:10:58.924 "claimed": false, 00:10:58.924 "zoned": false, 00:10:58.924 "supported_io_types": { 00:10:58.924 "read": true, 00:10:58.924 "write": true, 00:10:58.924 "unmap": true, 00:10:58.924 "flush": true, 00:10:58.924 "reset": true, 00:10:58.924 "nvme_admin": false, 00:10:58.924 "nvme_io": false, 00:10:58.924 "nvme_io_md": false, 00:10:58.924 "write_zeroes": true, 00:10:58.924 "zcopy": false, 00:10:58.924 "get_zone_info": false, 00:10:58.924 "zone_management": false, 00:10:58.924 "zone_append": false, 00:10:58.924 "compare": false, 00:10:58.924 "compare_and_write": false, 00:10:58.924 "abort": false, 00:10:58.924 "seek_hole": false, 00:10:58.924 "seek_data": false, 00:10:58.924 "copy": false, 00:10:58.924 "nvme_iov_md": false 00:10:58.924 }, 00:10:58.924 
"memory_domains": [ 00:10:58.924 { 00:10:58.924 "dma_device_id": "system", 00:10:58.924 "dma_device_type": 1 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.924 "dma_device_type": 2 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "system", 00:10:58.924 "dma_device_type": 1 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.924 "dma_device_type": 2 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "system", 00:10:58.924 "dma_device_type": 1 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.924 "dma_device_type": 2 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "system", 00:10:58.924 "dma_device_type": 1 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.924 "dma_device_type": 2 00:10:58.924 } 00:10:58.924 ], 00:10:58.924 "driver_specific": { 00:10:58.924 "raid": { 00:10:58.924 "uuid": "ec414845-07d9-490a-9524-5130a981044e", 00:10:58.924 "strip_size_kb": 64, 00:10:58.924 "state": "online", 00:10:58.924 "raid_level": "raid0", 00:10:58.924 "superblock": true, 00:10:58.924 "num_base_bdevs": 4, 00:10:58.924 "num_base_bdevs_discovered": 4, 00:10:58.924 "num_base_bdevs_operational": 4, 00:10:58.924 "base_bdevs_list": [ 00:10:58.924 { 00:10:58.924 "name": "pt1", 00:10:58.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.924 "is_configured": true, 00:10:58.924 "data_offset": 2048, 00:10:58.924 "data_size": 63488 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "name": "pt2", 00:10:58.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.924 "is_configured": true, 00:10:58.924 "data_offset": 2048, 00:10:58.924 "data_size": 63488 00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "name": "pt3", 00:10:58.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.924 "is_configured": true, 00:10:58.924 "data_offset": 2048, 00:10:58.924 "data_size": 63488 
00:10:58.924 }, 00:10:58.924 { 00:10:58.924 "name": "pt4", 00:10:58.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.924 "is_configured": true, 00:10:58.924 "data_offset": 2048, 00:10:58.924 "data_size": 63488 00:10:58.924 } 00:10:58.924 ] 00:10:58.924 } 00:10:58.924 } 00:10:58.924 }' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.924 pt2 00:10:58.924 pt3 00:10:58.924 pt4' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.924 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.184 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.185 [2024-11-18 03:59:55.686807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ec414845-07d9-490a-9524-5130a981044e '!=' ec414845-07d9-490a-9524-5130a981044e ']' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70712 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70712 ']' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70712 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70712 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.185 killing process with pid 70712 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70712' 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70712 00:10:59.185 [2024-11-18 03:59:55.743440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.185 [2024-11-18 03:59:55.743559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.185 03:59:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70712 00:10:59.185 [2024-11-18 03:59:55.743641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.185 [2024-11-18 03:59:55.743653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:59.754 [2024-11-18 03:59:56.163460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.135 03:59:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:01.135 00:11:01.136 real 0m5.467s 00:11:01.136 user 0m7.663s 00:11:01.136 sys 0m0.976s 00:11:01.136 03:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.136 03:59:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 ************************************ 00:11:01.136 END TEST raid_superblock_test 
00:11:01.136 ************************************ 00:11:01.136 03:59:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:01.136 03:59:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:01.136 03:59:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.136 03:59:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 ************************************ 00:11:01.136 START TEST raid_read_error_test 00:11:01.136 ************************************ 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g7wi3xUAQT 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70971 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70971 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70971 ']' 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.136 03:59:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 [2024-11-18 03:59:57.514396] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:01.136 [2024-11-18 03:59:57.514525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70971 ] 00:11:01.136 [2024-11-18 03:59:57.689237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.396 [2024-11-18 03:59:57.826927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.656 [2024-11-18 03:59:58.058739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.656 [2024-11-18 03:59:58.058784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.916 BaseBdev1_malloc 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.916 true 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.916 [2024-11-18 03:59:58.408375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.916 [2024-11-18 03:59:58.408442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.916 [2024-11-18 03:59:58.408463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:01.916 [2024-11-18 03:59:58.408475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.916 [2024-11-18 03:59:58.410751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.916 [2024-11-18 03:59:58.410789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.916 BaseBdev1 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.916 BaseBdev2_malloc 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.916 true 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.916 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.916 [2024-11-18 03:59:58.478950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:01.916 [2024-11-18 03:59:58.479004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.917 [2024-11-18 03:59:58.479020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.917 [2024-11-18 03:59:58.479031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.917 [2024-11-18 03:59:58.481324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.917 [2024-11-18 03:59:58.481361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.917 BaseBdev2 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.917 BaseBdev3_malloc 00:11:01.917 03:59:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.917 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 true 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 [2024-11-18 03:59:58.566227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:02.178 [2024-11-18 03:59:58.566278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.178 [2024-11-18 03:59:58.566294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:02.178 [2024-11-18 03:59:58.566304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.178 [2024-11-18 03:59:58.568690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.178 [2024-11-18 03:59:58.568726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:02.178 BaseBdev3 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 BaseBdev4_malloc 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 true 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 [2024-11-18 03:59:58.637864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:02.178 [2024-11-18 03:59:58.637916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.178 [2024-11-18 03:59:58.637932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:02.178 [2024-11-18 03:59:58.637943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.178 [2024-11-18 03:59:58.640219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.178 [2024-11-18 03:59:58.640257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:02.178 BaseBdev4 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 [2024-11-18 03:59:58.649913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.178 [2024-11-18 03:59:58.651958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.178 [2024-11-18 03:59:58.652031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.178 [2024-11-18 03:59:58.652095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.178 [2024-11-18 03:59:58.652308] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:02.178 [2024-11-18 03:59:58.652329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:02.178 [2024-11-18 03:59:58.652559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:02.178 [2024-11-18 03:59:58.652716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:02.178 [2024-11-18 03:59:58.652732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:02.178 [2024-11-18 03:59:58.652897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:02.178 03:59:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.178 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.179 "name": "raid_bdev1", 00:11:02.179 "uuid": "dc18e683-475c-467c-b7e2-01f56c096578", 00:11:02.179 "strip_size_kb": 64, 00:11:02.179 "state": "online", 00:11:02.179 "raid_level": "raid0", 00:11:02.179 "superblock": true, 00:11:02.179 "num_base_bdevs": 4, 00:11:02.179 "num_base_bdevs_discovered": 4, 00:11:02.179 "num_base_bdevs_operational": 4, 00:11:02.179 "base_bdevs_list": [ 00:11:02.179 
{ 00:11:02.179 "name": "BaseBdev1", 00:11:02.179 "uuid": "7029ea23-b357-50ff-b3ed-e1f761e9ed32", 00:11:02.179 "is_configured": true, 00:11:02.179 "data_offset": 2048, 00:11:02.179 "data_size": 63488 00:11:02.179 }, 00:11:02.179 { 00:11:02.179 "name": "BaseBdev2", 00:11:02.179 "uuid": "711c25d7-a828-505e-ab65-2da810f6fac9", 00:11:02.179 "is_configured": true, 00:11:02.179 "data_offset": 2048, 00:11:02.179 "data_size": 63488 00:11:02.179 }, 00:11:02.179 { 00:11:02.179 "name": "BaseBdev3", 00:11:02.179 "uuid": "5e58edfe-5902-51d2-b872-468980f6319f", 00:11:02.179 "is_configured": true, 00:11:02.179 "data_offset": 2048, 00:11:02.179 "data_size": 63488 00:11:02.179 }, 00:11:02.179 { 00:11:02.179 "name": "BaseBdev4", 00:11:02.179 "uuid": "66bada06-fab5-5c82-b8b8-6069ed2de8d7", 00:11:02.179 "is_configured": true, 00:11:02.179 "data_offset": 2048, 00:11:02.179 "data_size": 63488 00:11:02.179 } 00:11:02.179 ] 00:11:02.179 }' 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.179 03:59:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.447 03:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:02.447 03:59:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.707 [2024-11-18 03:59:59.178508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.646 04:00:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.646 04:00:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.646 "name": "raid_bdev1", 00:11:03.646 "uuid": "dc18e683-475c-467c-b7e2-01f56c096578", 00:11:03.646 "strip_size_kb": 64, 00:11:03.646 "state": "online", 00:11:03.646 "raid_level": "raid0", 00:11:03.646 "superblock": true, 00:11:03.646 "num_base_bdevs": 4, 00:11:03.646 "num_base_bdevs_discovered": 4, 00:11:03.646 "num_base_bdevs_operational": 4, 00:11:03.646 "base_bdevs_list": [ 00:11:03.646 { 00:11:03.646 "name": "BaseBdev1", 00:11:03.646 "uuid": "7029ea23-b357-50ff-b3ed-e1f761e9ed32", 00:11:03.646 "is_configured": true, 00:11:03.646 "data_offset": 2048, 00:11:03.646 "data_size": 63488 00:11:03.646 }, 00:11:03.646 { 00:11:03.646 "name": "BaseBdev2", 00:11:03.646 "uuid": "711c25d7-a828-505e-ab65-2da810f6fac9", 00:11:03.646 "is_configured": true, 00:11:03.646 "data_offset": 2048, 00:11:03.646 "data_size": 63488 00:11:03.646 }, 00:11:03.646 { 00:11:03.646 "name": "BaseBdev3", 00:11:03.646 "uuid": "5e58edfe-5902-51d2-b872-468980f6319f", 00:11:03.646 "is_configured": true, 00:11:03.646 "data_offset": 2048, 00:11:03.646 "data_size": 63488 00:11:03.646 }, 00:11:03.646 { 00:11:03.646 "name": "BaseBdev4", 00:11:03.646 "uuid": "66bada06-fab5-5c82-b8b8-6069ed2de8d7", 00:11:03.646 "is_configured": true, 00:11:03.646 "data_offset": 2048, 00:11:03.646 "data_size": 63488 00:11:03.646 } 00:11:03.646 ] 00:11:03.646 }' 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.646 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.905 [2024-11-18 04:00:00.530765] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.905 [2024-11-18 04:00:00.530816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.905 [2024-11-18 04:00:00.533559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.905 [2024-11-18 04:00:00.533633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.905 [2024-11-18 04:00:00.533686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.905 [2024-11-18 04:00:00.533700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:03.905 { 00:11:03.905 "results": [ 00:11:03.905 { 00:11:03.905 "job": "raid_bdev1", 00:11:03.905 "core_mask": "0x1", 00:11:03.905 "workload": "randrw", 00:11:03.905 "percentage": 50, 00:11:03.905 "status": "finished", 00:11:03.905 "queue_depth": 1, 00:11:03.905 "io_size": 131072, 00:11:03.905 "runtime": 1.352769, 00:11:03.905 "iops": 14029.002734391459, 00:11:03.905 "mibps": 1753.6253417989324, 00:11:03.905 "io_failed": 1, 00:11:03.905 "io_timeout": 0, 00:11:03.905 "avg_latency_us": 100.62169987467186, 00:11:03.905 "min_latency_us": 24.929257641921396, 00:11:03.905 "max_latency_us": 1337.907423580786 00:11:03.905 } 00:11:03.905 ], 00:11:03.905 "core_count": 1 00:11:03.905 } 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70971 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70971 ']' 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70971 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.905 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70971 00:11:04.165 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.165 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.165 killing process with pid 70971 00:11:04.165 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70971' 00:11:04.165 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70971 00:11:04.165 [2024-11-18 04:00:00.566628] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.165 04:00:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70971 00:11:04.425 [2024-11-18 04:00:00.913258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g7wi3xUAQT 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:05.806 00:11:05.806 real 0m4.747s 00:11:05.806 user 0m5.465s 00:11:05.806 sys 0m0.657s 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:05.806 04:00:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.806 ************************************ 00:11:05.806 END TEST raid_read_error_test 00:11:05.806 ************************************ 00:11:05.806 04:00:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:05.806 04:00:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:05.806 04:00:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.806 04:00:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.806 ************************************ 00:11:05.806 START TEST raid_write_error_test 00:11:05.806 ************************************ 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pzkHYA4ydi 00:11:05.806 04:00:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71117 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71117 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:05.806 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71117 ']' 00:11:05.807 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.807 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.807 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.807 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.807 04:00:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.807 [2024-11-18 04:00:02.332118] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:05.807 [2024-11-18 04:00:02.332250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71117 ] 00:11:06.066 [2024-11-18 04:00:02.506393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.066 [2024-11-18 04:00:02.637978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.325 [2024-11-18 04:00:02.867264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.325 [2024-11-18 04:00:02.867320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.585 BaseBdev1_malloc 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.585 true 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.585 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.845 [2024-11-18 04:00:03.226866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:06.845 [2024-11-18 04:00:03.226929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.845 [2024-11-18 04:00:03.226951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:06.845 [2024-11-18 04:00:03.226962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.845 [2024-11-18 04:00:03.229378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.845 [2024-11-18 04:00:03.229417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:06.846 BaseBdev1 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 BaseBdev2_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:06.846 04:00:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 true 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 [2024-11-18 04:00:03.299555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:06.846 [2024-11-18 04:00:03.299617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.846 [2024-11-18 04:00:03.299634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:06.846 [2024-11-18 04:00:03.299646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.846 [2024-11-18 04:00:03.302049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.846 [2024-11-18 04:00:03.302085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:06.846 BaseBdev2 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:06.846 BaseBdev3_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 true 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 [2024-11-18 04:00:03.382843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:06.846 [2024-11-18 04:00:03.382897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.846 [2024-11-18 04:00:03.382915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:06.846 [2024-11-18 04:00:03.382926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.846 [2024-11-18 04:00:03.385241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.846 [2024-11-18 04:00:03.385278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:06.846 BaseBdev3 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 BaseBdev4_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 true 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 [2024-11-18 04:00:03.457040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:06.846 [2024-11-18 04:00:03.457090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.846 [2024-11-18 04:00:03.457107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:06.846 [2024-11-18 04:00:03.457118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.846 [2024-11-18 04:00:03.459319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.846 [2024-11-18 04:00:03.459355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:06.846 BaseBdev4 
00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.846 [2024-11-18 04:00:03.469083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.846 [2024-11-18 04:00:03.471012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.846 [2024-11-18 04:00:03.471083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.846 [2024-11-18 04:00:03.471144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.846 [2024-11-18 04:00:03.471353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:06.846 [2024-11-18 04:00:03.471375] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.846 [2024-11-18 04:00:03.471631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:06.846 [2024-11-18 04:00:03.471798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:06.846 [2024-11-18 04:00:03.471815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:06.846 [2024-11-18 04:00:03.471978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.846 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.106 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.106 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.106 "name": "raid_bdev1", 00:11:07.106 "uuid": "589514fb-9966-46a0-bfe3-5faa9ba1857f", 00:11:07.106 "strip_size_kb": 64, 00:11:07.106 "state": "online", 00:11:07.106 "raid_level": "raid0", 00:11:07.106 "superblock": true, 00:11:07.106 "num_base_bdevs": 4, 00:11:07.106 "num_base_bdevs_discovered": 4, 00:11:07.106 
"num_base_bdevs_operational": 4, 00:11:07.106 "base_bdevs_list": [ 00:11:07.106 { 00:11:07.106 "name": "BaseBdev1", 00:11:07.106 "uuid": "27f0dd1a-dfaa-5a30-a969-5d695bd05dc7", 00:11:07.106 "is_configured": true, 00:11:07.106 "data_offset": 2048, 00:11:07.106 "data_size": 63488 00:11:07.106 }, 00:11:07.106 { 00:11:07.106 "name": "BaseBdev2", 00:11:07.106 "uuid": "f986af70-049b-5526-bb66-60edfc8b4f2d", 00:11:07.106 "is_configured": true, 00:11:07.106 "data_offset": 2048, 00:11:07.106 "data_size": 63488 00:11:07.106 }, 00:11:07.106 { 00:11:07.106 "name": "BaseBdev3", 00:11:07.106 "uuid": "f8388375-b663-5db2-8cdd-c3b63049f09f", 00:11:07.106 "is_configured": true, 00:11:07.106 "data_offset": 2048, 00:11:07.106 "data_size": 63488 00:11:07.106 }, 00:11:07.106 { 00:11:07.106 "name": "BaseBdev4", 00:11:07.106 "uuid": "9c9d7dc4-5585-5c45-b41a-66c3ede634c8", 00:11:07.106 "is_configured": true, 00:11:07.106 "data_offset": 2048, 00:11:07.106 "data_size": 63488 00:11:07.106 } 00:11:07.106 ] 00:11:07.106 }' 00:11:07.106 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.106 04:00:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.366 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:07.366 04:00:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:07.366 [2024-11-18 04:00:03.985762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.307 04:00:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.567 04:00:04 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.567 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.567 "name": "raid_bdev1", 00:11:08.567 "uuid": "589514fb-9966-46a0-bfe3-5faa9ba1857f", 00:11:08.567 "strip_size_kb": 64, 00:11:08.567 "state": "online", 00:11:08.567 "raid_level": "raid0", 00:11:08.567 "superblock": true, 00:11:08.567 "num_base_bdevs": 4, 00:11:08.567 "num_base_bdevs_discovered": 4, 00:11:08.567 "num_base_bdevs_operational": 4, 00:11:08.567 "base_bdevs_list": [ 00:11:08.567 { 00:11:08.567 "name": "BaseBdev1", 00:11:08.567 "uuid": "27f0dd1a-dfaa-5a30-a969-5d695bd05dc7", 00:11:08.567 "is_configured": true, 00:11:08.567 "data_offset": 2048, 00:11:08.567 "data_size": 63488 00:11:08.567 }, 00:11:08.567 { 00:11:08.567 "name": "BaseBdev2", 00:11:08.567 "uuid": "f986af70-049b-5526-bb66-60edfc8b4f2d", 00:11:08.567 "is_configured": true, 00:11:08.567 "data_offset": 2048, 00:11:08.567 "data_size": 63488 00:11:08.567 }, 00:11:08.567 { 00:11:08.567 "name": "BaseBdev3", 00:11:08.567 "uuid": "f8388375-b663-5db2-8cdd-c3b63049f09f", 00:11:08.567 "is_configured": true, 00:11:08.567 "data_offset": 2048, 00:11:08.567 "data_size": 63488 00:11:08.567 }, 00:11:08.567 { 00:11:08.567 "name": "BaseBdev4", 00:11:08.567 "uuid": "9c9d7dc4-5585-5c45-b41a-66c3ede634c8", 00:11:08.567 "is_configured": true, 00:11:08.567 "data_offset": 2048, 00:11:08.567 "data_size": 63488 00:11:08.567 } 00:11:08.567 ] 00:11:08.567 }' 00:11:08.567 04:00:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.567 04:00:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:08.827 [2024-11-18 04:00:05.346204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.827 [2024-11-18 04:00:05.346255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.827 [2024-11-18 04:00:05.348882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.827 [2024-11-18 04:00:05.348946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.827 [2024-11-18 04:00:05.348995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.827 [2024-11-18 04:00:05.349009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:08.827 { 00:11:08.827 "results": [ 00:11:08.827 { 00:11:08.827 "job": "raid_bdev1", 00:11:08.827 "core_mask": "0x1", 00:11:08.827 "workload": "randrw", 00:11:08.827 "percentage": 50, 00:11:08.827 "status": "finished", 00:11:08.827 "queue_depth": 1, 00:11:08.827 "io_size": 131072, 00:11:08.827 "runtime": 1.360979, 00:11:08.827 "iops": 14017.115620446752, 00:11:08.827 "mibps": 1752.139452555844, 00:11:08.827 "io_failed": 1, 00:11:08.827 "io_timeout": 0, 00:11:08.827 "avg_latency_us": 100.81972577755947, 00:11:08.827 "min_latency_us": 25.152838427947597, 00:11:08.827 "max_latency_us": 1294.9799126637554 00:11:08.827 } 00:11:08.827 ], 00:11:08.827 "core_count": 1 00:11:08.827 } 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71117 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71117 ']' 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71117 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.827 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71117 00:11:08.827 killing process with pid 71117 00:11:08.828 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.828 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.828 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71117' 00:11:08.828 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71117 00:11:08.828 [2024-11-18 04:00:05.391025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.828 04:00:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71117 00:11:09.398 [2024-11-18 04:00:05.740723] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.336 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pzkHYA4ydi 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:10.597 00:11:10.597 real 0m4.760s 00:11:10.597 user 0m5.461s 00:11:10.597 sys 0m0.661s 00:11:10.597 
************************************ 00:11:10.597 END TEST raid_write_error_test 00:11:10.597 ************************************ 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.597 04:00:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.597 04:00:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:10.597 04:00:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:10.597 04:00:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:10.597 04:00:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.597 04:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.597 ************************************ 00:11:10.597 START TEST raid_state_function_test 00:11:10.597 ************************************ 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.597 04:00:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:10.597 04:00:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:10.597 Process raid pid: 71266 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71266 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71266' 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71266 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71266 ']' 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.597 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.597 [2024-11-18 04:00:07.168208] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:10.597 [2024-11-18 04:00:07.168436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.857 [2024-11-18 04:00:07.350410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.857 [2024-11-18 04:00:07.481099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.117 [2024-11-18 04:00:07.717921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.117 [2024-11-18 04:00:07.718068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.377 [2024-11-18 04:00:07.991743] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.377 [2024-11-18 04:00:07.991885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.377 [2024-11-18 04:00:07.991916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.377 [2024-11-18 04:00:07.991941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.377 [2024-11-18 04:00:07.991959] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:11.377 [2024-11-18 04:00:07.991980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.377 [2024-11-18 04:00:07.991997] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.377 [2024-11-18 04:00:07.992017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.377 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.377 04:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:11.377 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.637 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.637 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.637 "name": "Existed_Raid", 00:11:11.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.637 "strip_size_kb": 64, 00:11:11.637 "state": "configuring", 00:11:11.637 "raid_level": "concat", 00:11:11.637 "superblock": false, 00:11:11.637 "num_base_bdevs": 4, 00:11:11.637 "num_base_bdevs_discovered": 0, 00:11:11.637 "num_base_bdevs_operational": 4, 00:11:11.637 "base_bdevs_list": [ 00:11:11.637 { 00:11:11.637 "name": "BaseBdev1", 00:11:11.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.637 "is_configured": false, 00:11:11.637 "data_offset": 0, 00:11:11.637 "data_size": 0 00:11:11.637 }, 00:11:11.637 { 00:11:11.637 "name": "BaseBdev2", 00:11:11.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.637 "is_configured": false, 00:11:11.637 "data_offset": 0, 00:11:11.637 "data_size": 0 00:11:11.637 }, 00:11:11.637 { 00:11:11.637 "name": "BaseBdev3", 00:11:11.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.637 "is_configured": false, 00:11:11.637 "data_offset": 0, 00:11:11.637 "data_size": 0 00:11:11.637 }, 00:11:11.637 { 00:11:11.637 "name": "BaseBdev4", 00:11:11.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.637 "is_configured": false, 00:11:11.637 "data_offset": 0, 00:11:11.637 "data_size": 0 00:11:11.637 } 00:11:11.637 ] 00:11:11.637 }' 00:11:11.637 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.637 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.897 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.898 [2024-11-18 04:00:08.422955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.898 [2024-11-18 04:00:08.422996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.898 [2024-11-18 04:00:08.434945] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.898 [2024-11-18 04:00:08.434984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.898 [2024-11-18 04:00:08.434992] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.898 [2024-11-18 04:00:08.435001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.898 [2024-11-18 04:00:08.435006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.898 [2024-11-18 04:00:08.435015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.898 [2024-11-18 04:00:08.435020] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.898 [2024-11-18 04:00:08.435028] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.898 [2024-11-18 04:00:08.488463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.898 BaseBdev1 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.898 [ 00:11:11.898 { 00:11:11.898 "name": "BaseBdev1", 00:11:11.898 "aliases": [ 00:11:11.898 "fa980740-745e-4ceb-ab02-b44551cf85ba" 00:11:11.898 ], 00:11:11.898 "product_name": "Malloc disk", 00:11:11.898 "block_size": 512, 00:11:11.898 "num_blocks": 65536, 00:11:11.898 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:11.898 "assigned_rate_limits": { 00:11:11.898 "rw_ios_per_sec": 0, 00:11:11.898 "rw_mbytes_per_sec": 0, 00:11:11.898 "r_mbytes_per_sec": 0, 00:11:11.898 "w_mbytes_per_sec": 0 00:11:11.898 }, 00:11:11.898 "claimed": true, 00:11:11.898 "claim_type": "exclusive_write", 00:11:11.898 "zoned": false, 00:11:11.898 "supported_io_types": { 00:11:11.898 "read": true, 00:11:11.898 "write": true, 00:11:11.898 "unmap": true, 00:11:11.898 "flush": true, 00:11:11.898 "reset": true, 00:11:11.898 "nvme_admin": false, 00:11:11.898 "nvme_io": false, 00:11:11.898 "nvme_io_md": false, 00:11:11.898 "write_zeroes": true, 00:11:11.898 "zcopy": true, 00:11:11.898 "get_zone_info": false, 00:11:11.898 "zone_management": false, 00:11:11.898 "zone_append": false, 00:11:11.898 "compare": false, 00:11:11.898 "compare_and_write": false, 00:11:11.898 "abort": true, 00:11:11.898 "seek_hole": false, 00:11:11.898 "seek_data": false, 00:11:11.898 "copy": true, 00:11:11.898 "nvme_iov_md": false 00:11:11.898 }, 00:11:11.898 "memory_domains": [ 00:11:11.898 { 00:11:11.898 "dma_device_id": "system", 00:11:11.898 "dma_device_type": 1 00:11:11.898 }, 00:11:11.898 { 00:11:11.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.898 "dma_device_type": 2 00:11:11.898 } 00:11:11.898 ], 00:11:11.898 "driver_specific": {} 00:11:11.898 } 00:11:11.898 ] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.898 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.158 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.158 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.158 "name": "Existed_Raid", 
00:11:12.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.158 "strip_size_kb": 64, 00:11:12.158 "state": "configuring", 00:11:12.158 "raid_level": "concat", 00:11:12.158 "superblock": false, 00:11:12.158 "num_base_bdevs": 4, 00:11:12.158 "num_base_bdevs_discovered": 1, 00:11:12.158 "num_base_bdevs_operational": 4, 00:11:12.158 "base_bdevs_list": [ 00:11:12.158 { 00:11:12.158 "name": "BaseBdev1", 00:11:12.158 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:12.158 "is_configured": true, 00:11:12.158 "data_offset": 0, 00:11:12.158 "data_size": 65536 00:11:12.158 }, 00:11:12.158 { 00:11:12.158 "name": "BaseBdev2", 00:11:12.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.158 "is_configured": false, 00:11:12.158 "data_offset": 0, 00:11:12.158 "data_size": 0 00:11:12.158 }, 00:11:12.158 { 00:11:12.158 "name": "BaseBdev3", 00:11:12.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.158 "is_configured": false, 00:11:12.158 "data_offset": 0, 00:11:12.158 "data_size": 0 00:11:12.158 }, 00:11:12.158 { 00:11:12.158 "name": "BaseBdev4", 00:11:12.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.158 "is_configured": false, 00:11:12.158 "data_offset": 0, 00:11:12.158 "data_size": 0 00:11:12.158 } 00:11:12.158 ] 00:11:12.158 }' 00:11:12.158 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.158 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.427 04:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.427 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.427 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.427 [2024-11-18 04:00:08.995667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.427 [2024-11-18 04:00:08.995806] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:12.427 04:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.427 [2024-11-18 04:00:09.007716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.427 [2024-11-18 04:00:09.009865] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.427 [2024-11-18 04:00:09.009943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.427 [2024-11-18 04:00:09.009970] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.427 [2024-11-18 04:00:09.009993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.427 [2024-11-18 04:00:09.010010] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:12.427 [2024-11-18 04:00:09.010030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.427 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.696 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.696 "name": "Existed_Raid", 00:11:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.696 "strip_size_kb": 64, 00:11:12.696 "state": "configuring", 00:11:12.696 "raid_level": "concat", 00:11:12.696 "superblock": false, 00:11:12.696 "num_base_bdevs": 4, 00:11:12.696 
"num_base_bdevs_discovered": 1, 00:11:12.696 "num_base_bdevs_operational": 4, 00:11:12.696 "base_bdevs_list": [ 00:11:12.696 { 00:11:12.696 "name": "BaseBdev1", 00:11:12.696 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:12.696 "is_configured": true, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 65536 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "name": "BaseBdev2", 00:11:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.696 "is_configured": false, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 0 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "name": "BaseBdev3", 00:11:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.696 "is_configured": false, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 0 00:11:12.696 }, 00:11:12.696 { 00:11:12.696 "name": "BaseBdev4", 00:11:12.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.696 "is_configured": false, 00:11:12.696 "data_offset": 0, 00:11:12.696 "data_size": 0 00:11:12.696 } 00:11:12.696 ] 00:11:12.696 }' 00:11:12.696 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.696 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 [2024-11-18 04:00:09.472525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.956 BaseBdev2 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.956 04:00:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 [ 00:11:12.956 { 00:11:12.956 "name": "BaseBdev2", 00:11:12.956 "aliases": [ 00:11:12.956 "ace88c85-7d3a-48ab-bc07-e47f167a8f29" 00:11:12.956 ], 00:11:12.956 "product_name": "Malloc disk", 00:11:12.956 "block_size": 512, 00:11:12.956 "num_blocks": 65536, 00:11:12.956 "uuid": "ace88c85-7d3a-48ab-bc07-e47f167a8f29", 00:11:12.956 "assigned_rate_limits": { 00:11:12.956 "rw_ios_per_sec": 0, 00:11:12.956 "rw_mbytes_per_sec": 0, 00:11:12.956 "r_mbytes_per_sec": 0, 00:11:12.956 "w_mbytes_per_sec": 0 00:11:12.956 }, 00:11:12.956 "claimed": true, 00:11:12.956 "claim_type": "exclusive_write", 00:11:12.956 "zoned": false, 00:11:12.956 "supported_io_types": { 
00:11:12.956 "read": true, 00:11:12.956 "write": true, 00:11:12.956 "unmap": true, 00:11:12.956 "flush": true, 00:11:12.956 "reset": true, 00:11:12.956 "nvme_admin": false, 00:11:12.956 "nvme_io": false, 00:11:12.956 "nvme_io_md": false, 00:11:12.956 "write_zeroes": true, 00:11:12.956 "zcopy": true, 00:11:12.956 "get_zone_info": false, 00:11:12.956 "zone_management": false, 00:11:12.956 "zone_append": false, 00:11:12.956 "compare": false, 00:11:12.956 "compare_and_write": false, 00:11:12.956 "abort": true, 00:11:12.956 "seek_hole": false, 00:11:12.956 "seek_data": false, 00:11:12.956 "copy": true, 00:11:12.956 "nvme_iov_md": false 00:11:12.956 }, 00:11:12.956 "memory_domains": [ 00:11:12.956 { 00:11:12.956 "dma_device_id": "system", 00:11:12.956 "dma_device_type": 1 00:11:12.956 }, 00:11:12.956 { 00:11:12.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.956 "dma_device_type": 2 00:11:12.956 } 00:11:12.956 ], 00:11:12.956 "driver_specific": {} 00:11:12.956 } 00:11:12.956 ] 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.956 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.956 "name": "Existed_Raid", 00:11:12.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.956 "strip_size_kb": 64, 00:11:12.956 "state": "configuring", 00:11:12.956 "raid_level": "concat", 00:11:12.956 "superblock": false, 00:11:12.956 "num_base_bdevs": 4, 00:11:12.957 "num_base_bdevs_discovered": 2, 00:11:12.957 "num_base_bdevs_operational": 4, 00:11:12.957 "base_bdevs_list": [ 00:11:12.957 { 00:11:12.957 "name": "BaseBdev1", 00:11:12.957 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:12.957 "is_configured": true, 00:11:12.957 "data_offset": 0, 00:11:12.957 "data_size": 65536 00:11:12.957 }, 00:11:12.957 { 00:11:12.957 "name": "BaseBdev2", 00:11:12.957 "uuid": "ace88c85-7d3a-48ab-bc07-e47f167a8f29", 00:11:12.957 
"is_configured": true, 00:11:12.957 "data_offset": 0, 00:11:12.957 "data_size": 65536 00:11:12.957 }, 00:11:12.957 { 00:11:12.957 "name": "BaseBdev3", 00:11:12.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.957 "is_configured": false, 00:11:12.957 "data_offset": 0, 00:11:12.957 "data_size": 0 00:11:12.957 }, 00:11:12.957 { 00:11:12.957 "name": "BaseBdev4", 00:11:12.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.957 "is_configured": false, 00:11:12.957 "data_offset": 0, 00:11:12.957 "data_size": 0 00:11:12.957 } 00:11:12.957 ] 00:11:12.957 }' 00:11:12.957 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.957 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.526 04:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:13.526 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.526 04:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.526 [2024-11-18 04:00:10.042663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.526 BaseBdev3 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.526 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.527 [ 00:11:13.527 { 00:11:13.527 "name": "BaseBdev3", 00:11:13.527 "aliases": [ 00:11:13.527 "57ebb8ee-c931-4a2e-951a-3c1e39f34698" 00:11:13.527 ], 00:11:13.527 "product_name": "Malloc disk", 00:11:13.527 "block_size": 512, 00:11:13.527 "num_blocks": 65536, 00:11:13.527 "uuid": "57ebb8ee-c931-4a2e-951a-3c1e39f34698", 00:11:13.527 "assigned_rate_limits": { 00:11:13.527 "rw_ios_per_sec": 0, 00:11:13.527 "rw_mbytes_per_sec": 0, 00:11:13.527 "r_mbytes_per_sec": 0, 00:11:13.527 "w_mbytes_per_sec": 0 00:11:13.527 }, 00:11:13.527 "claimed": true, 00:11:13.527 "claim_type": "exclusive_write", 00:11:13.527 "zoned": false, 00:11:13.527 "supported_io_types": { 00:11:13.527 "read": true, 00:11:13.527 "write": true, 00:11:13.527 "unmap": true, 00:11:13.527 "flush": true, 00:11:13.527 "reset": true, 00:11:13.527 "nvme_admin": false, 00:11:13.527 "nvme_io": false, 00:11:13.527 "nvme_io_md": false, 00:11:13.527 "write_zeroes": true, 00:11:13.527 "zcopy": true, 00:11:13.527 "get_zone_info": false, 00:11:13.527 "zone_management": false, 00:11:13.527 "zone_append": false, 00:11:13.527 "compare": false, 00:11:13.527 "compare_and_write": false, 
00:11:13.527 "abort": true, 00:11:13.527 "seek_hole": false, 00:11:13.527 "seek_data": false, 00:11:13.527 "copy": true, 00:11:13.527 "nvme_iov_md": false 00:11:13.527 }, 00:11:13.527 "memory_domains": [ 00:11:13.527 { 00:11:13.527 "dma_device_id": "system", 00:11:13.527 "dma_device_type": 1 00:11:13.527 }, 00:11:13.527 { 00:11:13.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.527 "dma_device_type": 2 00:11:13.527 } 00:11:13.527 ], 00:11:13.527 "driver_specific": {} 00:11:13.527 } 00:11:13.527 ] 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.527 "name": "Existed_Raid", 00:11:13.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.527 "strip_size_kb": 64, 00:11:13.527 "state": "configuring", 00:11:13.527 "raid_level": "concat", 00:11:13.527 "superblock": false, 00:11:13.527 "num_base_bdevs": 4, 00:11:13.527 "num_base_bdevs_discovered": 3, 00:11:13.527 "num_base_bdevs_operational": 4, 00:11:13.527 "base_bdevs_list": [ 00:11:13.527 { 00:11:13.527 "name": "BaseBdev1", 00:11:13.527 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:13.527 "is_configured": true, 00:11:13.527 "data_offset": 0, 00:11:13.527 "data_size": 65536 00:11:13.527 }, 00:11:13.527 { 00:11:13.527 "name": "BaseBdev2", 00:11:13.527 "uuid": "ace88c85-7d3a-48ab-bc07-e47f167a8f29", 00:11:13.527 "is_configured": true, 00:11:13.527 "data_offset": 0, 00:11:13.527 "data_size": 65536 00:11:13.527 }, 00:11:13.527 { 00:11:13.527 "name": "BaseBdev3", 00:11:13.527 "uuid": "57ebb8ee-c931-4a2e-951a-3c1e39f34698", 00:11:13.527 "is_configured": true, 00:11:13.527 "data_offset": 0, 00:11:13.527 "data_size": 65536 00:11:13.527 }, 00:11:13.527 { 00:11:13.527 "name": "BaseBdev4", 00:11:13.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.527 "is_configured": false, 
00:11:13.527 "data_offset": 0, 00:11:13.527 "data_size": 0 00:11:13.527 } 00:11:13.527 ] 00:11:13.527 }' 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.527 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.098 [2024-11-18 04:00:10.629315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:14.098 [2024-11-18 04:00:10.629368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.098 [2024-11-18 04:00:10.629376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:14.098 [2024-11-18 04:00:10.629662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:14.098 [2024-11-18 04:00:10.629828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.098 [2024-11-18 04:00:10.629863] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:14.098 [2024-11-18 04:00:10.630131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.098 BaseBdev4 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.098 [ 00:11:14.098 { 00:11:14.098 "name": "BaseBdev4", 00:11:14.098 "aliases": [ 00:11:14.098 "3ccedf08-c678-4422-8293-12d1ea8375c1" 00:11:14.098 ], 00:11:14.098 "product_name": "Malloc disk", 00:11:14.098 "block_size": 512, 00:11:14.098 "num_blocks": 65536, 00:11:14.098 "uuid": "3ccedf08-c678-4422-8293-12d1ea8375c1", 00:11:14.098 "assigned_rate_limits": { 00:11:14.098 "rw_ios_per_sec": 0, 00:11:14.098 "rw_mbytes_per_sec": 0, 00:11:14.098 "r_mbytes_per_sec": 0, 00:11:14.098 "w_mbytes_per_sec": 0 00:11:14.098 }, 00:11:14.098 "claimed": true, 00:11:14.098 "claim_type": "exclusive_write", 00:11:14.098 "zoned": false, 00:11:14.098 "supported_io_types": { 00:11:14.098 "read": true, 00:11:14.098 "write": true, 00:11:14.098 "unmap": true, 00:11:14.098 "flush": true, 00:11:14.098 "reset": true, 00:11:14.098 
"nvme_admin": false, 00:11:14.098 "nvme_io": false, 00:11:14.098 "nvme_io_md": false, 00:11:14.098 "write_zeroes": true, 00:11:14.098 "zcopy": true, 00:11:14.098 "get_zone_info": false, 00:11:14.098 "zone_management": false, 00:11:14.098 "zone_append": false, 00:11:14.098 "compare": false, 00:11:14.098 "compare_and_write": false, 00:11:14.098 "abort": true, 00:11:14.098 "seek_hole": false, 00:11:14.098 "seek_data": false, 00:11:14.098 "copy": true, 00:11:14.098 "nvme_iov_md": false 00:11:14.098 }, 00:11:14.098 "memory_domains": [ 00:11:14.098 { 00:11:14.098 "dma_device_id": "system", 00:11:14.098 "dma_device_type": 1 00:11:14.098 }, 00:11:14.098 { 00:11:14.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.098 "dma_device_type": 2 00:11:14.098 } 00:11:14.098 ], 00:11:14.098 "driver_specific": {} 00:11:14.098 } 00:11:14.098 ] 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.098 
04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.098 "name": "Existed_Raid", 00:11:14.098 "uuid": "c198aca2-1e96-4d69-bb3d-d93b83921766", 00:11:14.098 "strip_size_kb": 64, 00:11:14.098 "state": "online", 00:11:14.098 "raid_level": "concat", 00:11:14.098 "superblock": false, 00:11:14.098 "num_base_bdevs": 4, 00:11:14.098 "num_base_bdevs_discovered": 4, 00:11:14.098 "num_base_bdevs_operational": 4, 00:11:14.098 "base_bdevs_list": [ 00:11:14.098 { 00:11:14.098 "name": "BaseBdev1", 00:11:14.098 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:14.098 "is_configured": true, 00:11:14.098 "data_offset": 0, 00:11:14.098 "data_size": 65536 00:11:14.098 }, 00:11:14.098 { 00:11:14.098 "name": "BaseBdev2", 00:11:14.098 "uuid": "ace88c85-7d3a-48ab-bc07-e47f167a8f29", 00:11:14.098 "is_configured": true, 00:11:14.098 "data_offset": 0, 00:11:14.098 "data_size": 65536 00:11:14.098 }, 00:11:14.098 { 00:11:14.098 "name": "BaseBdev3", 
00:11:14.098 "uuid": "57ebb8ee-c931-4a2e-951a-3c1e39f34698", 00:11:14.098 "is_configured": true, 00:11:14.098 "data_offset": 0, 00:11:14.098 "data_size": 65536 00:11:14.098 }, 00:11:14.098 { 00:11:14.098 "name": "BaseBdev4", 00:11:14.098 "uuid": "3ccedf08-c678-4422-8293-12d1ea8375c1", 00:11:14.098 "is_configured": true, 00:11:14.098 "data_offset": 0, 00:11:14.098 "data_size": 65536 00:11:14.098 } 00:11:14.098 ] 00:11:14.098 }' 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.098 04:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:14.668 [2024-11-18 04:00:11.112943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.668 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.668 
04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:14.668 "name": "Existed_Raid", 00:11:14.668 "aliases": [ 00:11:14.668 "c198aca2-1e96-4d69-bb3d-d93b83921766" 00:11:14.668 ], 00:11:14.668 "product_name": "Raid Volume", 00:11:14.668 "block_size": 512, 00:11:14.668 "num_blocks": 262144, 00:11:14.668 "uuid": "c198aca2-1e96-4d69-bb3d-d93b83921766", 00:11:14.669 "assigned_rate_limits": { 00:11:14.669 "rw_ios_per_sec": 0, 00:11:14.669 "rw_mbytes_per_sec": 0, 00:11:14.669 "r_mbytes_per_sec": 0, 00:11:14.669 "w_mbytes_per_sec": 0 00:11:14.669 }, 00:11:14.669 "claimed": false, 00:11:14.669 "zoned": false, 00:11:14.669 "supported_io_types": { 00:11:14.669 "read": true, 00:11:14.669 "write": true, 00:11:14.669 "unmap": true, 00:11:14.669 "flush": true, 00:11:14.669 "reset": true, 00:11:14.669 "nvme_admin": false, 00:11:14.669 "nvme_io": false, 00:11:14.669 "nvme_io_md": false, 00:11:14.669 "write_zeroes": true, 00:11:14.669 "zcopy": false, 00:11:14.669 "get_zone_info": false, 00:11:14.669 "zone_management": false, 00:11:14.669 "zone_append": false, 00:11:14.669 "compare": false, 00:11:14.669 "compare_and_write": false, 00:11:14.669 "abort": false, 00:11:14.669 "seek_hole": false, 00:11:14.669 "seek_data": false, 00:11:14.669 "copy": false, 00:11:14.669 "nvme_iov_md": false 00:11:14.669 }, 00:11:14.669 "memory_domains": [ 00:11:14.669 { 00:11:14.669 "dma_device_id": "system", 00:11:14.669 "dma_device_type": 1 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.669 "dma_device_type": 2 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": "system", 00:11:14.669 "dma_device_type": 1 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.669 "dma_device_type": 2 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": "system", 00:11:14.669 "dma_device_type": 1 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:14.669 "dma_device_type": 2 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": "system", 00:11:14.669 "dma_device_type": 1 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.669 "dma_device_type": 2 00:11:14.669 } 00:11:14.669 ], 00:11:14.669 "driver_specific": { 00:11:14.669 "raid": { 00:11:14.669 "uuid": "c198aca2-1e96-4d69-bb3d-d93b83921766", 00:11:14.669 "strip_size_kb": 64, 00:11:14.669 "state": "online", 00:11:14.669 "raid_level": "concat", 00:11:14.669 "superblock": false, 00:11:14.669 "num_base_bdevs": 4, 00:11:14.669 "num_base_bdevs_discovered": 4, 00:11:14.669 "num_base_bdevs_operational": 4, 00:11:14.669 "base_bdevs_list": [ 00:11:14.669 { 00:11:14.669 "name": "BaseBdev1", 00:11:14.669 "uuid": "fa980740-745e-4ceb-ab02-b44551cf85ba", 00:11:14.669 "is_configured": true, 00:11:14.669 "data_offset": 0, 00:11:14.669 "data_size": 65536 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "name": "BaseBdev2", 00:11:14.669 "uuid": "ace88c85-7d3a-48ab-bc07-e47f167a8f29", 00:11:14.669 "is_configured": true, 00:11:14.669 "data_offset": 0, 00:11:14.669 "data_size": 65536 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "name": "BaseBdev3", 00:11:14.669 "uuid": "57ebb8ee-c931-4a2e-951a-3c1e39f34698", 00:11:14.669 "is_configured": true, 00:11:14.669 "data_offset": 0, 00:11:14.669 "data_size": 65536 00:11:14.669 }, 00:11:14.669 { 00:11:14.669 "name": "BaseBdev4", 00:11:14.669 "uuid": "3ccedf08-c678-4422-8293-12d1ea8375c1", 00:11:14.669 "is_configured": true, 00:11:14.669 "data_offset": 0, 00:11:14.669 "data_size": 65536 00:11:14.669 } 00:11:14.669 ] 00:11:14.669 } 00:11:14.669 } 00:11:14.669 }' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:14.669 BaseBdev2 
00:11:14.669 BaseBdev3 00:11:14.669 BaseBdev4' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.669 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.929 04:00:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.929 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.930 04:00:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 [2024-11-18 04:00:11.420078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.930 [2024-11-18 04:00:11.420113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.930 [2024-11-18 04:00:11.420163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.930 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.190 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.190 "name": "Existed_Raid", 00:11:15.190 "uuid": "c198aca2-1e96-4d69-bb3d-d93b83921766", 00:11:15.190 "strip_size_kb": 64, 00:11:15.190 "state": "offline", 00:11:15.190 "raid_level": "concat", 00:11:15.190 "superblock": false, 00:11:15.190 "num_base_bdevs": 4, 00:11:15.190 "num_base_bdevs_discovered": 3, 00:11:15.190 "num_base_bdevs_operational": 3, 00:11:15.190 "base_bdevs_list": [ 00:11:15.190 { 00:11:15.190 "name": null, 00:11:15.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.190 "is_configured": false, 00:11:15.190 "data_offset": 0, 00:11:15.190 "data_size": 65536 00:11:15.190 }, 00:11:15.190 { 00:11:15.190 "name": "BaseBdev2", 00:11:15.190 "uuid": "ace88c85-7d3a-48ab-bc07-e47f167a8f29", 00:11:15.190 "is_configured": 
true, 00:11:15.190 "data_offset": 0, 00:11:15.190 "data_size": 65536 00:11:15.190 }, 00:11:15.190 { 00:11:15.190 "name": "BaseBdev3", 00:11:15.190 "uuid": "57ebb8ee-c931-4a2e-951a-3c1e39f34698", 00:11:15.190 "is_configured": true, 00:11:15.190 "data_offset": 0, 00:11:15.190 "data_size": 65536 00:11:15.190 }, 00:11:15.190 { 00:11:15.190 "name": "BaseBdev4", 00:11:15.190 "uuid": "3ccedf08-c678-4422-8293-12d1ea8375c1", 00:11:15.190 "is_configured": true, 00:11:15.190 "data_offset": 0, 00:11:15.190 "data_size": 65536 00:11:15.190 } 00:11:15.190 ] 00:11:15.190 }' 00:11:15.190 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.190 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:15.450 04:00:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.450 [2024-11-18 04:00:11.999736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.710 [2024-11-18 04:00:12.161598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.710 04:00:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.710 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.710 [2024-11-18 04:00:12.326213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:15.710 [2024-11-18 04:00:12.326286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 BaseBdev2 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 [ 00:11:15.971 { 00:11:15.971 "name": "BaseBdev2", 00:11:15.971 "aliases": [ 00:11:15.971 "971bfce7-20b0-4bed-ad69-c01d63f22394" 00:11:15.971 ], 00:11:15.971 "product_name": "Malloc disk", 00:11:15.971 "block_size": 512, 00:11:15.971 "num_blocks": 65536, 00:11:15.971 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:15.971 "assigned_rate_limits": { 00:11:15.971 "rw_ios_per_sec": 0, 00:11:15.971 "rw_mbytes_per_sec": 0, 00:11:15.971 "r_mbytes_per_sec": 0, 00:11:15.971 "w_mbytes_per_sec": 0 00:11:15.971 }, 00:11:15.971 "claimed": false, 00:11:15.971 "zoned": false, 00:11:15.971 "supported_io_types": { 00:11:15.971 "read": true, 00:11:15.971 "write": true, 00:11:15.971 "unmap": true, 00:11:15.971 "flush": true, 00:11:15.971 "reset": true, 00:11:15.971 "nvme_admin": false, 00:11:15.971 "nvme_io": false, 00:11:15.971 "nvme_io_md": false, 00:11:15.971 "write_zeroes": true, 00:11:15.971 "zcopy": true, 00:11:15.971 "get_zone_info": false, 00:11:15.971 "zone_management": false, 00:11:15.971 "zone_append": false, 00:11:15.971 "compare": false, 00:11:15.971 "compare_and_write": false, 00:11:15.971 "abort": true, 00:11:15.971 "seek_hole": false, 00:11:15.971 
"seek_data": false, 00:11:15.971 "copy": true, 00:11:15.972 "nvme_iov_md": false 00:11:15.972 }, 00:11:15.972 "memory_domains": [ 00:11:15.972 { 00:11:15.972 "dma_device_id": "system", 00:11:15.972 "dma_device_type": 1 00:11:15.972 }, 00:11:15.972 { 00:11:15.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.972 "dma_device_type": 2 00:11:15.972 } 00:11:15.972 ], 00:11:15.972 "driver_specific": {} 00:11:15.972 } 00:11:15.972 ] 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.972 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.232 BaseBdev3 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.232 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.232 [ 00:11:16.232 { 00:11:16.232 "name": "BaseBdev3", 00:11:16.232 "aliases": [ 00:11:16.232 "d4ecf670-717f-4ea9-b504-b3cfe4840cdf" 00:11:16.232 ], 00:11:16.232 "product_name": "Malloc disk", 00:11:16.232 "block_size": 512, 00:11:16.232 "num_blocks": 65536, 00:11:16.232 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:16.232 "assigned_rate_limits": { 00:11:16.232 "rw_ios_per_sec": 0, 00:11:16.232 "rw_mbytes_per_sec": 0, 00:11:16.232 "r_mbytes_per_sec": 0, 00:11:16.232 "w_mbytes_per_sec": 0 00:11:16.232 }, 00:11:16.232 "claimed": false, 00:11:16.232 "zoned": false, 00:11:16.232 "supported_io_types": { 00:11:16.232 "read": true, 00:11:16.233 "write": true, 00:11:16.233 "unmap": true, 00:11:16.233 "flush": true, 00:11:16.233 "reset": true, 00:11:16.233 "nvme_admin": false, 00:11:16.233 "nvme_io": false, 00:11:16.233 "nvme_io_md": false, 00:11:16.233 "write_zeroes": true, 00:11:16.233 "zcopy": true, 00:11:16.233 "get_zone_info": false, 00:11:16.233 "zone_management": false, 00:11:16.233 "zone_append": false, 00:11:16.233 "compare": false, 00:11:16.233 "compare_and_write": false, 00:11:16.233 "abort": true, 00:11:16.233 "seek_hole": false, 00:11:16.233 "seek_data": false, 
00:11:16.233 "copy": true, 00:11:16.233 "nvme_iov_md": false 00:11:16.233 }, 00:11:16.233 "memory_domains": [ 00:11:16.233 { 00:11:16.233 "dma_device_id": "system", 00:11:16.233 "dma_device_type": 1 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.233 "dma_device_type": 2 00:11:16.233 } 00:11:16.233 ], 00:11:16.233 "driver_specific": {} 00:11:16.233 } 00:11:16.233 ] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 BaseBdev4 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.233 
04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 [ 00:11:16.233 { 00:11:16.233 "name": "BaseBdev4", 00:11:16.233 "aliases": [ 00:11:16.233 "f6a63848-e942-47dd-9af6-912e277d1f12" 00:11:16.233 ], 00:11:16.233 "product_name": "Malloc disk", 00:11:16.233 "block_size": 512, 00:11:16.233 "num_blocks": 65536, 00:11:16.233 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:16.233 "assigned_rate_limits": { 00:11:16.233 "rw_ios_per_sec": 0, 00:11:16.233 "rw_mbytes_per_sec": 0, 00:11:16.233 "r_mbytes_per_sec": 0, 00:11:16.233 "w_mbytes_per_sec": 0 00:11:16.233 }, 00:11:16.233 "claimed": false, 00:11:16.233 "zoned": false, 00:11:16.233 "supported_io_types": { 00:11:16.233 "read": true, 00:11:16.233 "write": true, 00:11:16.233 "unmap": true, 00:11:16.233 "flush": true, 00:11:16.233 "reset": true, 00:11:16.233 "nvme_admin": false, 00:11:16.233 "nvme_io": false, 00:11:16.233 "nvme_io_md": false, 00:11:16.233 "write_zeroes": true, 00:11:16.233 "zcopy": true, 00:11:16.233 "get_zone_info": false, 00:11:16.233 "zone_management": false, 00:11:16.233 "zone_append": false, 00:11:16.233 "compare": false, 00:11:16.233 "compare_and_write": false, 00:11:16.233 "abort": true, 00:11:16.233 "seek_hole": false, 00:11:16.233 "seek_data": false, 00:11:16.233 
"copy": true, 00:11:16.233 "nvme_iov_md": false 00:11:16.233 }, 00:11:16.233 "memory_domains": [ 00:11:16.233 { 00:11:16.233 "dma_device_id": "system", 00:11:16.233 "dma_device_type": 1 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.233 "dma_device_type": 2 00:11:16.233 } 00:11:16.233 ], 00:11:16.233 "driver_specific": {} 00:11:16.233 } 00:11:16.233 ] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 [2024-11-18 04:00:12.739752] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.233 [2024-11-18 04:00:12.739898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.233 [2024-11-18 04:00:12.739944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.233 [2024-11-18 04:00:12.742049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.233 [2024-11-18 04:00:12.742143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 04:00:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.233 "name": "Existed_Raid", 00:11:16.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.233 "strip_size_kb": 64, 00:11:16.233 "state": "configuring", 00:11:16.233 
"raid_level": "concat", 00:11:16.233 "superblock": false, 00:11:16.233 "num_base_bdevs": 4, 00:11:16.233 "num_base_bdevs_discovered": 3, 00:11:16.233 "num_base_bdevs_operational": 4, 00:11:16.233 "base_bdevs_list": [ 00:11:16.233 { 00:11:16.233 "name": "BaseBdev1", 00:11:16.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.233 "is_configured": false, 00:11:16.233 "data_offset": 0, 00:11:16.233 "data_size": 0 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "name": "BaseBdev2", 00:11:16.233 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:16.233 "is_configured": true, 00:11:16.233 "data_offset": 0, 00:11:16.233 "data_size": 65536 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "name": "BaseBdev3", 00:11:16.233 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:16.233 "is_configured": true, 00:11:16.233 "data_offset": 0, 00:11:16.233 "data_size": 65536 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "name": "BaseBdev4", 00:11:16.233 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:16.233 "is_configured": true, 00:11:16.233 "data_offset": 0, 00:11:16.233 "data_size": 65536 00:11:16.233 } 00:11:16.233 ] 00:11:16.233 }' 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.233 04:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 [2024-11-18 04:00:13.151206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.803 "name": "Existed_Raid", 00:11:16.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.803 "strip_size_kb": 64, 00:11:16.803 "state": "configuring", 00:11:16.803 "raid_level": "concat", 00:11:16.803 "superblock": false, 
00:11:16.803 "num_base_bdevs": 4, 00:11:16.803 "num_base_bdevs_discovered": 2, 00:11:16.803 "num_base_bdevs_operational": 4, 00:11:16.803 "base_bdevs_list": [ 00:11:16.803 { 00:11:16.803 "name": "BaseBdev1", 00:11:16.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.803 "is_configured": false, 00:11:16.803 "data_offset": 0, 00:11:16.803 "data_size": 0 00:11:16.803 }, 00:11:16.803 { 00:11:16.803 "name": null, 00:11:16.803 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:16.803 "is_configured": false, 00:11:16.803 "data_offset": 0, 00:11:16.803 "data_size": 65536 00:11:16.803 }, 00:11:16.803 { 00:11:16.803 "name": "BaseBdev3", 00:11:16.803 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:16.803 "is_configured": true, 00:11:16.803 "data_offset": 0, 00:11:16.803 "data_size": 65536 00:11:16.803 }, 00:11:16.803 { 00:11:16.803 "name": "BaseBdev4", 00:11:16.803 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:16.803 "is_configured": true, 00:11:16.803 "data_offset": 0, 00:11:16.803 "data_size": 65536 00:11:16.803 } 00:11:16.803 ] 00:11:16.803 }' 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.803 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:17.064 04:00:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.064 [2024-11-18 04:00:13.656448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:17.064 BaseBdev1 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:17.064 [ 00:11:17.064 { 00:11:17.064 "name": "BaseBdev1", 00:11:17.064 "aliases": [ 00:11:17.064 "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3" 00:11:17.064 ], 00:11:17.064 "product_name": "Malloc disk", 00:11:17.064 "block_size": 512, 00:11:17.064 "num_blocks": 65536, 00:11:17.064 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:17.064 "assigned_rate_limits": { 00:11:17.064 "rw_ios_per_sec": 0, 00:11:17.064 "rw_mbytes_per_sec": 0, 00:11:17.064 "r_mbytes_per_sec": 0, 00:11:17.064 "w_mbytes_per_sec": 0 00:11:17.064 }, 00:11:17.064 "claimed": true, 00:11:17.064 "claim_type": "exclusive_write", 00:11:17.064 "zoned": false, 00:11:17.064 "supported_io_types": { 00:11:17.064 "read": true, 00:11:17.064 "write": true, 00:11:17.064 "unmap": true, 00:11:17.064 "flush": true, 00:11:17.064 "reset": true, 00:11:17.064 "nvme_admin": false, 00:11:17.064 "nvme_io": false, 00:11:17.064 "nvme_io_md": false, 00:11:17.064 "write_zeroes": true, 00:11:17.064 "zcopy": true, 00:11:17.064 "get_zone_info": false, 00:11:17.064 "zone_management": false, 00:11:17.064 "zone_append": false, 00:11:17.064 "compare": false, 00:11:17.064 "compare_and_write": false, 00:11:17.064 "abort": true, 00:11:17.064 "seek_hole": false, 00:11:17.064 "seek_data": false, 00:11:17.064 "copy": true, 00:11:17.064 "nvme_iov_md": false 00:11:17.064 }, 00:11:17.064 "memory_domains": [ 00:11:17.064 { 00:11:17.064 "dma_device_id": "system", 00:11:17.064 "dma_device_type": 1 00:11:17.064 }, 00:11:17.064 { 00:11:17.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.064 "dma_device_type": 2 00:11:17.064 } 00:11:17.064 ], 00:11:17.064 "driver_specific": {} 00:11:17.064 } 00:11:17.064 ] 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.064 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.065 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.065 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.065 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.065 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.065 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.065 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.325 "name": "Existed_Raid", 00:11:17.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.325 "strip_size_kb": 64, 00:11:17.325 "state": "configuring", 00:11:17.325 "raid_level": "concat", 00:11:17.325 "superblock": false, 
00:11:17.325 "num_base_bdevs": 4, 00:11:17.325 "num_base_bdevs_discovered": 3, 00:11:17.325 "num_base_bdevs_operational": 4, 00:11:17.325 "base_bdevs_list": [ 00:11:17.325 { 00:11:17.325 "name": "BaseBdev1", 00:11:17.325 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:17.325 "is_configured": true, 00:11:17.325 "data_offset": 0, 00:11:17.325 "data_size": 65536 00:11:17.325 }, 00:11:17.325 { 00:11:17.325 "name": null, 00:11:17.325 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:17.325 "is_configured": false, 00:11:17.325 "data_offset": 0, 00:11:17.325 "data_size": 65536 00:11:17.325 }, 00:11:17.325 { 00:11:17.325 "name": "BaseBdev3", 00:11:17.325 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:17.325 "is_configured": true, 00:11:17.325 "data_offset": 0, 00:11:17.325 "data_size": 65536 00:11:17.325 }, 00:11:17.325 { 00:11:17.325 "name": "BaseBdev4", 00:11:17.325 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:17.325 "is_configured": true, 00:11:17.325 "data_offset": 0, 00:11:17.325 "data_size": 65536 00:11:17.325 } 00:11:17.325 ] 00:11:17.325 }' 00:11:17.325 04:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.325 04:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:17.585 04:00:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.585 [2024-11-18 04:00:14.187671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.585 04:00:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.585 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.845 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.845 "name": "Existed_Raid", 00:11:17.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.845 "strip_size_kb": 64, 00:11:17.845 "state": "configuring", 00:11:17.845 "raid_level": "concat", 00:11:17.845 "superblock": false, 00:11:17.845 "num_base_bdevs": 4, 00:11:17.845 "num_base_bdevs_discovered": 2, 00:11:17.845 "num_base_bdevs_operational": 4, 00:11:17.845 "base_bdevs_list": [ 00:11:17.845 { 00:11:17.845 "name": "BaseBdev1", 00:11:17.845 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:17.845 "is_configured": true, 00:11:17.845 "data_offset": 0, 00:11:17.845 "data_size": 65536 00:11:17.845 }, 00:11:17.845 { 00:11:17.845 "name": null, 00:11:17.845 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:17.845 "is_configured": false, 00:11:17.845 "data_offset": 0, 00:11:17.845 "data_size": 65536 00:11:17.845 }, 00:11:17.845 { 00:11:17.845 "name": null, 00:11:17.845 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:17.845 "is_configured": false, 00:11:17.845 "data_offset": 0, 00:11:17.845 "data_size": 65536 00:11:17.845 }, 00:11:17.845 { 00:11:17.845 "name": "BaseBdev4", 00:11:17.845 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:17.845 "is_configured": true, 00:11:17.845 "data_offset": 0, 00:11:17.845 "data_size": 65536 00:11:17.845 } 00:11:17.845 ] 00:11:17.845 }' 00:11:17.845 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.845 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.112 [2024-11-18 04:00:14.646887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.112 "name": "Existed_Raid", 00:11:18.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.112 "strip_size_kb": 64, 00:11:18.112 "state": "configuring", 00:11:18.112 "raid_level": "concat", 00:11:18.112 "superblock": false, 00:11:18.112 "num_base_bdevs": 4, 00:11:18.112 "num_base_bdevs_discovered": 3, 00:11:18.112 "num_base_bdevs_operational": 4, 00:11:18.112 "base_bdevs_list": [ 00:11:18.112 { 00:11:18.112 "name": "BaseBdev1", 00:11:18.112 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:18.112 "is_configured": true, 00:11:18.112 "data_offset": 0, 00:11:18.112 "data_size": 65536 00:11:18.112 }, 00:11:18.112 { 00:11:18.112 "name": null, 00:11:18.112 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:18.112 "is_configured": false, 00:11:18.112 "data_offset": 0, 00:11:18.112 "data_size": 65536 00:11:18.112 }, 00:11:18.112 { 00:11:18.112 "name": "BaseBdev3", 00:11:18.112 "uuid": 
"d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:18.112 "is_configured": true, 00:11:18.112 "data_offset": 0, 00:11:18.112 "data_size": 65536 00:11:18.112 }, 00:11:18.112 { 00:11:18.112 "name": "BaseBdev4", 00:11:18.112 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:18.112 "is_configured": true, 00:11:18.112 "data_offset": 0, 00:11:18.112 "data_size": 65536 00:11:18.112 } 00:11:18.112 ] 00:11:18.112 }' 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.112 04:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.694 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.694 [2024-11-18 04:00:15.126092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.695 "name": "Existed_Raid", 00:11:18.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.695 "strip_size_kb": 64, 00:11:18.695 "state": "configuring", 00:11:18.695 "raid_level": "concat", 00:11:18.695 "superblock": false, 00:11:18.695 "num_base_bdevs": 4, 00:11:18.695 
"num_base_bdevs_discovered": 2, 00:11:18.695 "num_base_bdevs_operational": 4, 00:11:18.695 "base_bdevs_list": [ 00:11:18.695 { 00:11:18.695 "name": null, 00:11:18.695 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:18.695 "is_configured": false, 00:11:18.695 "data_offset": 0, 00:11:18.695 "data_size": 65536 00:11:18.695 }, 00:11:18.695 { 00:11:18.695 "name": null, 00:11:18.695 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:18.695 "is_configured": false, 00:11:18.695 "data_offset": 0, 00:11:18.695 "data_size": 65536 00:11:18.695 }, 00:11:18.695 { 00:11:18.695 "name": "BaseBdev3", 00:11:18.695 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:18.695 "is_configured": true, 00:11:18.695 "data_offset": 0, 00:11:18.695 "data_size": 65536 00:11:18.695 }, 00:11:18.695 { 00:11:18.695 "name": "BaseBdev4", 00:11:18.695 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:18.695 "is_configured": true, 00:11:18.695 "data_offset": 0, 00:11:18.695 "data_size": 65536 00:11:18.695 } 00:11:18.695 ] 00:11:18.695 }' 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.695 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.264 [2024-11-18 04:00:15.753547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.264 "name": "Existed_Raid", 00:11:19.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.264 "strip_size_kb": 64, 00:11:19.264 "state": "configuring", 00:11:19.264 "raid_level": "concat", 00:11:19.264 "superblock": false, 00:11:19.264 "num_base_bdevs": 4, 00:11:19.264 "num_base_bdevs_discovered": 3, 00:11:19.264 "num_base_bdevs_operational": 4, 00:11:19.264 "base_bdevs_list": [ 00:11:19.264 { 00:11:19.264 "name": null, 00:11:19.264 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:19.264 "is_configured": false, 00:11:19.264 "data_offset": 0, 00:11:19.264 "data_size": 65536 00:11:19.264 }, 00:11:19.264 { 00:11:19.264 "name": "BaseBdev2", 00:11:19.264 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:19.264 "is_configured": true, 00:11:19.264 "data_offset": 0, 00:11:19.264 "data_size": 65536 00:11:19.264 }, 00:11:19.264 { 00:11:19.264 "name": "BaseBdev3", 00:11:19.264 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:19.264 "is_configured": true, 00:11:19.264 "data_offset": 0, 00:11:19.264 "data_size": 65536 00:11:19.264 }, 00:11:19.264 { 00:11:19.264 "name": "BaseBdev4", 00:11:19.264 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:19.264 "is_configured": true, 00:11:19.264 "data_offset": 0, 00:11:19.264 "data_size": 65536 00:11:19.264 } 00:11:19.264 ] 00:11:19.264 }' 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.264 04:00:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 98e0a7f1-1c4a-4f01-9de4-8057461ae9d3 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.834 [2024-11-18 04:00:16.371864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:19.834 [2024-11-18 04:00:16.372013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:19.834 [2024-11-18 04:00:16.372039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:19.834 [2024-11-18 04:00:16.372364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:19.834 [2024-11-18 04:00:16.372565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:19.834 [2024-11-18 04:00:16.372608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:19.834 [2024-11-18 04:00:16.372918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.834 NewBaseBdev 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.834 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.835 [ 00:11:19.835 { 00:11:19.835 "name": "NewBaseBdev", 00:11:19.835 "aliases": [ 00:11:19.835 "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3" 00:11:19.835 ], 00:11:19.835 "product_name": "Malloc disk", 00:11:19.835 "block_size": 512, 00:11:19.835 "num_blocks": 65536, 00:11:19.835 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:19.835 "assigned_rate_limits": { 00:11:19.835 "rw_ios_per_sec": 0, 00:11:19.835 "rw_mbytes_per_sec": 0, 00:11:19.835 "r_mbytes_per_sec": 0, 00:11:19.835 "w_mbytes_per_sec": 0 00:11:19.835 }, 00:11:19.835 "claimed": true, 00:11:19.835 "claim_type": "exclusive_write", 00:11:19.835 "zoned": false, 00:11:19.835 "supported_io_types": { 00:11:19.835 "read": true, 00:11:19.835 "write": true, 00:11:19.835 "unmap": true, 00:11:19.835 "flush": true, 00:11:19.835 "reset": true, 00:11:19.835 "nvme_admin": false, 00:11:19.835 "nvme_io": false, 00:11:19.835 "nvme_io_md": false, 00:11:19.835 "write_zeroes": true, 00:11:19.835 "zcopy": true, 00:11:19.835 "get_zone_info": false, 00:11:19.835 "zone_management": false, 00:11:19.835 "zone_append": false, 00:11:19.835 "compare": false, 00:11:19.835 "compare_and_write": false, 00:11:19.835 "abort": true, 00:11:19.835 "seek_hole": false, 00:11:19.835 "seek_data": false, 00:11:19.835 "copy": true, 00:11:19.835 "nvme_iov_md": false 00:11:19.835 }, 00:11:19.835 "memory_domains": [ 00:11:19.835 { 00:11:19.835 "dma_device_id": "system", 00:11:19.835 "dma_device_type": 1 00:11:19.835 }, 00:11:19.835 { 00:11:19.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.835 "dma_device_type": 2 00:11:19.835 } 00:11:19.835 ], 00:11:19.835 "driver_specific": {} 00:11:19.835 } 00:11:19.835 ] 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.835 "name": "Existed_Raid", 00:11:19.835 "uuid": "fec83a77-437a-459c-bc9c-a071445e0e21", 00:11:19.835 "strip_size_kb": 64, 00:11:19.835 "state": "online", 00:11:19.835 "raid_level": "concat", 00:11:19.835 "superblock": false, 00:11:19.835 
"num_base_bdevs": 4, 00:11:19.835 "num_base_bdevs_discovered": 4, 00:11:19.835 "num_base_bdevs_operational": 4, 00:11:19.835 "base_bdevs_list": [ 00:11:19.835 { 00:11:19.835 "name": "NewBaseBdev", 00:11:19.835 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:19.835 "is_configured": true, 00:11:19.835 "data_offset": 0, 00:11:19.835 "data_size": 65536 00:11:19.835 }, 00:11:19.835 { 00:11:19.835 "name": "BaseBdev2", 00:11:19.835 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:19.835 "is_configured": true, 00:11:19.835 "data_offset": 0, 00:11:19.835 "data_size": 65536 00:11:19.835 }, 00:11:19.835 { 00:11:19.835 "name": "BaseBdev3", 00:11:19.835 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:19.835 "is_configured": true, 00:11:19.835 "data_offset": 0, 00:11:19.835 "data_size": 65536 00:11:19.835 }, 00:11:19.835 { 00:11:19.835 "name": "BaseBdev4", 00:11:19.835 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:19.835 "is_configured": true, 00:11:19.835 "data_offset": 0, 00:11:19.835 "data_size": 65536 00:11:19.835 } 00:11:19.835 ] 00:11:19.835 }' 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.835 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.405 04:00:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.405 [2024-11-18 04:00:16.883534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.405 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.405 "name": "Existed_Raid", 00:11:20.405 "aliases": [ 00:11:20.405 "fec83a77-437a-459c-bc9c-a071445e0e21" 00:11:20.405 ], 00:11:20.405 "product_name": "Raid Volume", 00:11:20.405 "block_size": 512, 00:11:20.405 "num_blocks": 262144, 00:11:20.405 "uuid": "fec83a77-437a-459c-bc9c-a071445e0e21", 00:11:20.405 "assigned_rate_limits": { 00:11:20.405 "rw_ios_per_sec": 0, 00:11:20.405 "rw_mbytes_per_sec": 0, 00:11:20.405 "r_mbytes_per_sec": 0, 00:11:20.405 "w_mbytes_per_sec": 0 00:11:20.405 }, 00:11:20.405 "claimed": false, 00:11:20.405 "zoned": false, 00:11:20.405 "supported_io_types": { 00:11:20.405 "read": true, 00:11:20.405 "write": true, 00:11:20.405 "unmap": true, 00:11:20.405 "flush": true, 00:11:20.405 "reset": true, 00:11:20.405 "nvme_admin": false, 00:11:20.405 "nvme_io": false, 00:11:20.405 "nvme_io_md": false, 00:11:20.405 "write_zeroes": true, 00:11:20.405 "zcopy": false, 00:11:20.405 "get_zone_info": false, 00:11:20.405 "zone_management": false, 00:11:20.405 "zone_append": false, 00:11:20.405 "compare": false, 00:11:20.405 "compare_and_write": false, 00:11:20.405 "abort": false, 00:11:20.405 "seek_hole": false, 00:11:20.405 "seek_data": false, 00:11:20.405 "copy": false, 00:11:20.405 "nvme_iov_md": false 00:11:20.405 }, 
00:11:20.405 "memory_domains": [ 00:11:20.405 { 00:11:20.405 "dma_device_id": "system", 00:11:20.405 "dma_device_type": 1 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.405 "dma_device_type": 2 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "system", 00:11:20.405 "dma_device_type": 1 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.405 "dma_device_type": 2 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "system", 00:11:20.405 "dma_device_type": 1 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.405 "dma_device_type": 2 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "system", 00:11:20.405 "dma_device_type": 1 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.405 "dma_device_type": 2 00:11:20.405 } 00:11:20.405 ], 00:11:20.405 "driver_specific": { 00:11:20.405 "raid": { 00:11:20.405 "uuid": "fec83a77-437a-459c-bc9c-a071445e0e21", 00:11:20.405 "strip_size_kb": 64, 00:11:20.405 "state": "online", 00:11:20.405 "raid_level": "concat", 00:11:20.405 "superblock": false, 00:11:20.405 "num_base_bdevs": 4, 00:11:20.405 "num_base_bdevs_discovered": 4, 00:11:20.405 "num_base_bdevs_operational": 4, 00:11:20.405 "base_bdevs_list": [ 00:11:20.405 { 00:11:20.405 "name": "NewBaseBdev", 00:11:20.405 "uuid": "98e0a7f1-1c4a-4f01-9de4-8057461ae9d3", 00:11:20.405 "is_configured": true, 00:11:20.405 "data_offset": 0, 00:11:20.405 "data_size": 65536 00:11:20.405 }, 00:11:20.405 { 00:11:20.405 "name": "BaseBdev2", 00:11:20.405 "uuid": "971bfce7-20b0-4bed-ad69-c01d63f22394", 00:11:20.406 "is_configured": true, 00:11:20.406 "data_offset": 0, 00:11:20.406 "data_size": 65536 00:11:20.406 }, 00:11:20.406 { 00:11:20.406 "name": "BaseBdev3", 00:11:20.406 "uuid": "d4ecf670-717f-4ea9-b504-b3cfe4840cdf", 00:11:20.406 "is_configured": true, 00:11:20.406 "data_offset": 0, 
00:11:20.406 "data_size": 65536 00:11:20.406 }, 00:11:20.406 { 00:11:20.406 "name": "BaseBdev4", 00:11:20.406 "uuid": "f6a63848-e942-47dd-9af6-912e277d1f12", 00:11:20.406 "is_configured": true, 00:11:20.406 "data_offset": 0, 00:11:20.406 "data_size": 65536 00:11:20.406 } 00:11:20.406 ] 00:11:20.406 } 00:11:20.406 } 00:11:20.406 }' 00:11:20.406 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.406 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:20.406 BaseBdev2 00:11:20.406 BaseBdev3 00:11:20.406 BaseBdev4' 00:11:20.406 04:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.406 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.666 [2024-11-18 04:00:17.198541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:20.666 [2024-11-18 04:00:17.198666] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.666 [2024-11-18 04:00:17.198760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.666 [2024-11-18 04:00:17.198854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.666 [2024-11-18 04:00:17.198867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71266 00:11:20.666 04:00:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71266 ']' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71266 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71266 00:11:20.666 killing process with pid 71266 00:11:20.666 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.667 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.667 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71266' 00:11:20.667 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71266 00:11:20.667 [2024-11-18 04:00:17.247166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.667 04:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71266 00:11:21.236 [2024-11-18 04:00:17.671492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.617 04:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:22.617 00:11:22.617 real 0m11.809s 00:11:22.617 user 0m18.503s 00:11:22.618 sys 0m2.257s 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.618 ************************************ 00:11:22.618 END TEST raid_state_function_test 00:11:22.618 ************************************ 00:11:22.618 04:00:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:22.618 04:00:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.618 04:00:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.618 04:00:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.618 ************************************ 00:11:22.618 START TEST raid_state_function_test_sb 00:11:22.618 ************************************ 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:22.618 Process raid pid: 71940 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=71940 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71940' 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71940 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71940 ']' 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.618 04:00:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.618 [2024-11-18 04:00:19.050645] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:22.618 [2024-11-18 04:00:19.050794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.618 [2024-11-18 04:00:19.228013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.876 [2024-11-18 04:00:19.366722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.135 [2024-11-18 04:00:19.601943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.135 [2024-11-18 04:00:19.601985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.394 [2024-11-18 04:00:19.879045] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.394 [2024-11-18 04:00:19.879203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.394 [2024-11-18 04:00:19.879218] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.394 [2024-11-18 04:00:19.879229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.394 [2024-11-18 04:00:19.879235] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:23.394 [2024-11-18 04:00:19.879244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.394 [2024-11-18 04:00:19.879250] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:23.394 [2024-11-18 04:00:19.879259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.394 
04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.394 "name": "Existed_Raid", 00:11:23.394 "uuid": "34c89039-e127-45c4-8643-5335aebf101a", 00:11:23.394 "strip_size_kb": 64, 00:11:23.394 "state": "configuring", 00:11:23.394 "raid_level": "concat", 00:11:23.394 "superblock": true, 00:11:23.394 "num_base_bdevs": 4, 00:11:23.394 "num_base_bdevs_discovered": 0, 00:11:23.394 "num_base_bdevs_operational": 4, 00:11:23.394 "base_bdevs_list": [ 00:11:23.394 { 00:11:23.394 "name": "BaseBdev1", 00:11:23.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.394 "is_configured": false, 00:11:23.394 "data_offset": 0, 00:11:23.394 "data_size": 0 00:11:23.394 }, 00:11:23.394 { 00:11:23.394 "name": "BaseBdev2", 00:11:23.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.394 "is_configured": false, 00:11:23.394 "data_offset": 0, 00:11:23.394 "data_size": 0 00:11:23.394 }, 00:11:23.394 { 00:11:23.394 "name": "BaseBdev3", 00:11:23.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.394 "is_configured": false, 00:11:23.394 "data_offset": 0, 00:11:23.394 "data_size": 0 00:11:23.394 }, 00:11:23.394 { 00:11:23.394 "name": "BaseBdev4", 00:11:23.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.394 "is_configured": false, 00:11:23.394 "data_offset": 0, 00:11:23.394 "data_size": 0 00:11:23.394 } 00:11:23.394 ] 00:11:23.394 }' 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.394 04:00:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 04:00:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 [2024-11-18 04:00:20.342269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.964 [2024-11-18 04:00:20.342405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 [2024-11-18 04:00:20.350226] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.964 [2024-11-18 04:00:20.350312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.964 [2024-11-18 04:00:20.350339] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.964 [2024-11-18 04:00:20.350364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.964 [2024-11-18 04:00:20.350382] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.964 [2024-11-18 04:00:20.350403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.964 [2024-11-18 04:00:20.350420] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:23.964 [2024-11-18 04:00:20.350441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 [2024-11-18 04:00:20.399813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.964 BaseBdev1 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.964 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 [ 00:11:23.964 { 00:11:23.964 "name": "BaseBdev1", 00:11:23.964 "aliases": [ 00:11:23.964 "798b7c96-121e-46a8-827d-4e4fa631806e" 00:11:23.964 ], 00:11:23.964 "product_name": "Malloc disk", 00:11:23.964 "block_size": 512, 00:11:23.964 "num_blocks": 65536, 00:11:23.964 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:23.964 "assigned_rate_limits": { 00:11:23.964 "rw_ios_per_sec": 0, 00:11:23.964 "rw_mbytes_per_sec": 0, 00:11:23.964 "r_mbytes_per_sec": 0, 00:11:23.964 "w_mbytes_per_sec": 0 00:11:23.964 }, 00:11:23.964 "claimed": true, 00:11:23.964 "claim_type": "exclusive_write", 00:11:23.964 "zoned": false, 00:11:23.964 "supported_io_types": { 00:11:23.964 "read": true, 00:11:23.964 "write": true, 00:11:23.964 "unmap": true, 00:11:23.964 "flush": true, 00:11:23.964 "reset": true, 00:11:23.964 "nvme_admin": false, 00:11:23.964 "nvme_io": false, 00:11:23.964 "nvme_io_md": false, 00:11:23.964 "write_zeroes": true, 00:11:23.964 "zcopy": true, 00:11:23.964 "get_zone_info": false, 00:11:23.964 "zone_management": false, 00:11:23.965 "zone_append": false, 00:11:23.965 "compare": false, 00:11:23.965 "compare_and_write": false, 00:11:23.965 "abort": true, 00:11:23.965 "seek_hole": false, 00:11:23.965 "seek_data": false, 00:11:23.965 "copy": true, 00:11:23.965 "nvme_iov_md": false 00:11:23.965 }, 00:11:23.965 "memory_domains": [ 00:11:23.965 { 00:11:23.965 "dma_device_id": "system", 00:11:23.965 "dma_device_type": 1 00:11:23.965 }, 00:11:23.965 { 00:11:23.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.965 "dma_device_type": 2 00:11:23.965 } 
00:11:23.965 ], 00:11:23.965 "driver_specific": {} 00:11:23.965 } 00:11:23.965 ] 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.965 04:00:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.965 "name": "Existed_Raid", 00:11:23.965 "uuid": "db6bb3e3-24a7-4fb5-9415-981b3b35e4b9", 00:11:23.965 "strip_size_kb": 64, 00:11:23.965 "state": "configuring", 00:11:23.965 "raid_level": "concat", 00:11:23.965 "superblock": true, 00:11:23.965 "num_base_bdevs": 4, 00:11:23.965 "num_base_bdevs_discovered": 1, 00:11:23.965 "num_base_bdevs_operational": 4, 00:11:23.965 "base_bdevs_list": [ 00:11:23.965 { 00:11:23.965 "name": "BaseBdev1", 00:11:23.965 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:23.965 "is_configured": true, 00:11:23.965 "data_offset": 2048, 00:11:23.965 "data_size": 63488 00:11:23.965 }, 00:11:23.965 { 00:11:23.965 "name": "BaseBdev2", 00:11:23.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.965 "is_configured": false, 00:11:23.965 "data_offset": 0, 00:11:23.965 "data_size": 0 00:11:23.965 }, 00:11:23.965 { 00:11:23.965 "name": "BaseBdev3", 00:11:23.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.965 "is_configured": false, 00:11:23.965 "data_offset": 0, 00:11:23.965 "data_size": 0 00:11:23.965 }, 00:11:23.965 { 00:11:23.965 "name": "BaseBdev4", 00:11:23.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.965 "is_configured": false, 00:11:23.965 "data_offset": 0, 00:11:23.965 "data_size": 0 00:11:23.965 } 00:11:23.965 ] 00:11:23.965 }' 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.965 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.534 04:00:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 [2024-11-18 04:00:20.915062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.534 [2024-11-18 04:00:20.915246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 [2024-11-18 04:00:20.927074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.534 [2024-11-18 04:00:20.929210] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.534 [2024-11-18 04:00:20.929257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.534 [2024-11-18 04:00:20.929268] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.534 [2024-11-18 04:00:20.929279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.534 [2024-11-18 04:00:20.929286] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:24.534 [2024-11-18 04:00:20.929294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.534 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:24.534 "name": "Existed_Raid", 00:11:24.534 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:24.534 "strip_size_kb": 64, 00:11:24.534 "state": "configuring", 00:11:24.534 "raid_level": "concat", 00:11:24.534 "superblock": true, 00:11:24.534 "num_base_bdevs": 4, 00:11:24.534 "num_base_bdevs_discovered": 1, 00:11:24.534 "num_base_bdevs_operational": 4, 00:11:24.534 "base_bdevs_list": [ 00:11:24.534 { 00:11:24.534 "name": "BaseBdev1", 00:11:24.534 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:24.534 "is_configured": true, 00:11:24.534 "data_offset": 2048, 00:11:24.534 "data_size": 63488 00:11:24.534 }, 00:11:24.534 { 00:11:24.534 "name": "BaseBdev2", 00:11:24.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.534 "is_configured": false, 00:11:24.534 "data_offset": 0, 00:11:24.534 "data_size": 0 00:11:24.534 }, 00:11:24.534 { 00:11:24.534 "name": "BaseBdev3", 00:11:24.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.534 "is_configured": false, 00:11:24.534 "data_offset": 0, 00:11:24.534 "data_size": 0 00:11:24.534 }, 00:11:24.534 { 00:11:24.534 "name": "BaseBdev4", 00:11:24.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.535 "is_configured": false, 00:11:24.535 "data_offset": 0, 00:11:24.535 "data_size": 0 00:11:24.535 } 00:11:24.535 ] 00:11:24.535 }' 00:11:24.535 04:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.535 04:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.794 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.794 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.794 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 [2024-11-18 04:00:21.434736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:25.055 BaseBdev2 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 [ 00:11:25.055 { 00:11:25.055 "name": "BaseBdev2", 00:11:25.055 "aliases": [ 00:11:25.055 "988290d7-915d-4cec-beb4-6131705a4501" 00:11:25.055 ], 00:11:25.055 "product_name": "Malloc disk", 00:11:25.055 "block_size": 512, 00:11:25.055 "num_blocks": 65536, 00:11:25.055 "uuid": "988290d7-915d-4cec-beb4-6131705a4501", 
00:11:25.055 "assigned_rate_limits": { 00:11:25.055 "rw_ios_per_sec": 0, 00:11:25.055 "rw_mbytes_per_sec": 0, 00:11:25.055 "r_mbytes_per_sec": 0, 00:11:25.055 "w_mbytes_per_sec": 0 00:11:25.055 }, 00:11:25.055 "claimed": true, 00:11:25.055 "claim_type": "exclusive_write", 00:11:25.055 "zoned": false, 00:11:25.055 "supported_io_types": { 00:11:25.055 "read": true, 00:11:25.055 "write": true, 00:11:25.055 "unmap": true, 00:11:25.055 "flush": true, 00:11:25.055 "reset": true, 00:11:25.055 "nvme_admin": false, 00:11:25.055 "nvme_io": false, 00:11:25.055 "nvme_io_md": false, 00:11:25.055 "write_zeroes": true, 00:11:25.055 "zcopy": true, 00:11:25.055 "get_zone_info": false, 00:11:25.055 "zone_management": false, 00:11:25.055 "zone_append": false, 00:11:25.055 "compare": false, 00:11:25.055 "compare_and_write": false, 00:11:25.055 "abort": true, 00:11:25.055 "seek_hole": false, 00:11:25.055 "seek_data": false, 00:11:25.055 "copy": true, 00:11:25.055 "nvme_iov_md": false 00:11:25.055 }, 00:11:25.055 "memory_domains": [ 00:11:25.055 { 00:11:25.055 "dma_device_id": "system", 00:11:25.055 "dma_device_type": 1 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.055 "dma_device_type": 2 00:11:25.055 } 00:11:25.055 ], 00:11:25.055 "driver_specific": {} 00:11:25.055 } 00:11:25.055 ] 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.055 "name": "Existed_Raid", 00:11:25.055 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:25.055 "strip_size_kb": 64, 00:11:25.055 "state": "configuring", 00:11:25.055 "raid_level": "concat", 00:11:25.055 "superblock": true, 00:11:25.055 "num_base_bdevs": 4, 00:11:25.055 "num_base_bdevs_discovered": 2, 00:11:25.055 
"num_base_bdevs_operational": 4, 00:11:25.055 "base_bdevs_list": [ 00:11:25.055 { 00:11:25.055 "name": "BaseBdev1", 00:11:25.055 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:25.055 "is_configured": true, 00:11:25.055 "data_offset": 2048, 00:11:25.055 "data_size": 63488 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev2", 00:11:25.055 "uuid": "988290d7-915d-4cec-beb4-6131705a4501", 00:11:25.055 "is_configured": true, 00:11:25.055 "data_offset": 2048, 00:11:25.055 "data_size": 63488 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev3", 00:11:25.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.055 "is_configured": false, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 0 00:11:25.055 }, 00:11:25.055 { 00:11:25.055 "name": "BaseBdev4", 00:11:25.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.055 "is_configured": false, 00:11:25.055 "data_offset": 0, 00:11:25.055 "data_size": 0 00:11:25.055 } 00:11:25.055 ] 00:11:25.055 }' 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.055 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.315 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.315 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.315 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.315 [2024-11-18 04:00:21.954504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.576 BaseBdev3 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.576 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.577 [ 00:11:25.577 { 00:11:25.577 "name": "BaseBdev3", 00:11:25.577 "aliases": [ 00:11:25.577 "5f952e94-f277-4083-9a31-a61fae831d23" 00:11:25.577 ], 00:11:25.577 "product_name": "Malloc disk", 00:11:25.577 "block_size": 512, 00:11:25.577 "num_blocks": 65536, 00:11:25.577 "uuid": "5f952e94-f277-4083-9a31-a61fae831d23", 00:11:25.577 "assigned_rate_limits": { 00:11:25.577 "rw_ios_per_sec": 0, 00:11:25.577 "rw_mbytes_per_sec": 0, 00:11:25.577 "r_mbytes_per_sec": 0, 00:11:25.577 "w_mbytes_per_sec": 0 00:11:25.577 }, 00:11:25.577 "claimed": true, 00:11:25.577 "claim_type": "exclusive_write", 00:11:25.577 "zoned": false, 00:11:25.577 "supported_io_types": { 
00:11:25.577 "read": true, 00:11:25.577 "write": true, 00:11:25.577 "unmap": true, 00:11:25.577 "flush": true, 00:11:25.577 "reset": true, 00:11:25.577 "nvme_admin": false, 00:11:25.577 "nvme_io": false, 00:11:25.577 "nvme_io_md": false, 00:11:25.577 "write_zeroes": true, 00:11:25.577 "zcopy": true, 00:11:25.577 "get_zone_info": false, 00:11:25.577 "zone_management": false, 00:11:25.577 "zone_append": false, 00:11:25.577 "compare": false, 00:11:25.577 "compare_and_write": false, 00:11:25.577 "abort": true, 00:11:25.577 "seek_hole": false, 00:11:25.577 "seek_data": false, 00:11:25.577 "copy": true, 00:11:25.577 "nvme_iov_md": false 00:11:25.577 }, 00:11:25.577 "memory_domains": [ 00:11:25.577 { 00:11:25.577 "dma_device_id": "system", 00:11:25.577 "dma_device_type": 1 00:11:25.577 }, 00:11:25.577 { 00:11:25.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.577 "dma_device_type": 2 00:11:25.577 } 00:11:25.577 ], 00:11:25.577 "driver_specific": {} 00:11:25.577 } 00:11:25.577 ] 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.577 04:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.577 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.577 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.577 "name": "Existed_Raid", 00:11:25.577 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:25.577 "strip_size_kb": 64, 00:11:25.577 "state": "configuring", 00:11:25.577 "raid_level": "concat", 00:11:25.577 "superblock": true, 00:11:25.577 "num_base_bdevs": 4, 00:11:25.577 "num_base_bdevs_discovered": 3, 00:11:25.577 "num_base_bdevs_operational": 4, 00:11:25.577 "base_bdevs_list": [ 00:11:25.577 { 00:11:25.577 "name": "BaseBdev1", 00:11:25.577 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:25.577 "is_configured": true, 00:11:25.577 "data_offset": 2048, 00:11:25.577 "data_size": 63488 00:11:25.577 }, 00:11:25.577 { 00:11:25.577 "name": "BaseBdev2", 00:11:25.577 
"uuid": "988290d7-915d-4cec-beb4-6131705a4501", 00:11:25.577 "is_configured": true, 00:11:25.577 "data_offset": 2048, 00:11:25.577 "data_size": 63488 00:11:25.577 }, 00:11:25.577 { 00:11:25.577 "name": "BaseBdev3", 00:11:25.577 "uuid": "5f952e94-f277-4083-9a31-a61fae831d23", 00:11:25.577 "is_configured": true, 00:11:25.577 "data_offset": 2048, 00:11:25.577 "data_size": 63488 00:11:25.577 }, 00:11:25.577 { 00:11:25.577 "name": "BaseBdev4", 00:11:25.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.577 "is_configured": false, 00:11:25.577 "data_offset": 0, 00:11:25.577 "data_size": 0 00:11:25.577 } 00:11:25.577 ] 00:11:25.577 }' 00:11:25.577 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.577 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.837 [2024-11-18 04:00:22.449301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:25.837 [2024-11-18 04:00:22.449688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:25.837 [2024-11-18 04:00:22.449738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:25.837 [2024-11-18 04:00:22.450068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.837 BaseBdev4 00:11:25.837 [2024-11-18 04:00:22.450266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:25.837 [2024-11-18 04:00:22.450321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:25.837 [2024-11-18 04:00:22.450498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.837 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.837 [ 00:11:25.837 { 00:11:25.837 "name": "BaseBdev4", 00:11:25.837 "aliases": [ 00:11:25.837 "63b82cbb-037f-40e4-9741-631c7051e79f" 00:11:25.837 ], 00:11:25.837 "product_name": "Malloc disk", 00:11:25.837 "block_size": 512, 00:11:25.837 
"num_blocks": 65536, 00:11:25.837 "uuid": "63b82cbb-037f-40e4-9741-631c7051e79f", 00:11:25.837 "assigned_rate_limits": { 00:11:25.837 "rw_ios_per_sec": 0, 00:11:25.837 "rw_mbytes_per_sec": 0, 00:11:25.837 "r_mbytes_per_sec": 0, 00:11:26.104 "w_mbytes_per_sec": 0 00:11:26.104 }, 00:11:26.104 "claimed": true, 00:11:26.104 "claim_type": "exclusive_write", 00:11:26.104 "zoned": false, 00:11:26.104 "supported_io_types": { 00:11:26.104 "read": true, 00:11:26.104 "write": true, 00:11:26.104 "unmap": true, 00:11:26.104 "flush": true, 00:11:26.104 "reset": true, 00:11:26.104 "nvme_admin": false, 00:11:26.104 "nvme_io": false, 00:11:26.104 "nvme_io_md": false, 00:11:26.104 "write_zeroes": true, 00:11:26.104 "zcopy": true, 00:11:26.104 "get_zone_info": false, 00:11:26.104 "zone_management": false, 00:11:26.104 "zone_append": false, 00:11:26.104 "compare": false, 00:11:26.104 "compare_and_write": false, 00:11:26.104 "abort": true, 00:11:26.104 "seek_hole": false, 00:11:26.104 "seek_data": false, 00:11:26.104 "copy": true, 00:11:26.104 "nvme_iov_md": false 00:11:26.104 }, 00:11:26.104 "memory_domains": [ 00:11:26.104 { 00:11:26.104 "dma_device_id": "system", 00:11:26.104 "dma_device_type": 1 00:11:26.104 }, 00:11:26.104 { 00:11:26.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.104 "dma_device_type": 2 00:11:26.104 } 00:11:26.104 ], 00:11:26.104 "driver_specific": {} 00:11:26.104 } 00:11:26.104 ] 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.104 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.105 "name": "Existed_Raid", 00:11:26.105 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:26.105 "strip_size_kb": 64, 00:11:26.105 "state": "online", 00:11:26.105 "raid_level": "concat", 00:11:26.105 "superblock": true, 00:11:26.105 "num_base_bdevs": 4, 
00:11:26.105 "num_base_bdevs_discovered": 4, 00:11:26.105 "num_base_bdevs_operational": 4, 00:11:26.105 "base_bdevs_list": [ 00:11:26.105 { 00:11:26.105 "name": "BaseBdev1", 00:11:26.105 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:26.105 "is_configured": true, 00:11:26.105 "data_offset": 2048, 00:11:26.105 "data_size": 63488 00:11:26.105 }, 00:11:26.105 { 00:11:26.105 "name": "BaseBdev2", 00:11:26.105 "uuid": "988290d7-915d-4cec-beb4-6131705a4501", 00:11:26.105 "is_configured": true, 00:11:26.105 "data_offset": 2048, 00:11:26.105 "data_size": 63488 00:11:26.105 }, 00:11:26.105 { 00:11:26.105 "name": "BaseBdev3", 00:11:26.105 "uuid": "5f952e94-f277-4083-9a31-a61fae831d23", 00:11:26.105 "is_configured": true, 00:11:26.105 "data_offset": 2048, 00:11:26.105 "data_size": 63488 00:11:26.105 }, 00:11:26.105 { 00:11:26.105 "name": "BaseBdev4", 00:11:26.105 "uuid": "63b82cbb-037f-40e4-9741-631c7051e79f", 00:11:26.105 "is_configured": true, 00:11:26.105 "data_offset": 2048, 00:11:26.105 "data_size": 63488 00:11:26.105 } 00:11:26.105 ] 00:11:26.105 }' 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.105 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.381 
04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.381 [2024-11-18 04:00:22.945024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.381 "name": "Existed_Raid", 00:11:26.381 "aliases": [ 00:11:26.381 "db1f1848-92d7-41c3-9a1c-3a1d4c695e56" 00:11:26.381 ], 00:11:26.381 "product_name": "Raid Volume", 00:11:26.381 "block_size": 512, 00:11:26.381 "num_blocks": 253952, 00:11:26.381 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:26.381 "assigned_rate_limits": { 00:11:26.381 "rw_ios_per_sec": 0, 00:11:26.381 "rw_mbytes_per_sec": 0, 00:11:26.381 "r_mbytes_per_sec": 0, 00:11:26.381 "w_mbytes_per_sec": 0 00:11:26.381 }, 00:11:26.381 "claimed": false, 00:11:26.381 "zoned": false, 00:11:26.381 "supported_io_types": { 00:11:26.381 "read": true, 00:11:26.381 "write": true, 00:11:26.381 "unmap": true, 00:11:26.381 "flush": true, 00:11:26.381 "reset": true, 00:11:26.381 "nvme_admin": false, 00:11:26.381 "nvme_io": false, 00:11:26.381 "nvme_io_md": false, 00:11:26.381 "write_zeroes": true, 00:11:26.381 "zcopy": false, 00:11:26.381 "get_zone_info": false, 00:11:26.381 "zone_management": false, 00:11:26.381 "zone_append": false, 00:11:26.381 "compare": false, 00:11:26.381 "compare_and_write": false, 00:11:26.381 "abort": false, 00:11:26.381 "seek_hole": false, 00:11:26.381 "seek_data": false, 00:11:26.381 "copy": false, 00:11:26.381 
"nvme_iov_md": false 00:11:26.381 }, 00:11:26.381 "memory_domains": [ 00:11:26.381 { 00:11:26.381 "dma_device_id": "system", 00:11:26.381 "dma_device_type": 1 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.381 "dma_device_type": 2 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "system", 00:11:26.381 "dma_device_type": 1 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.381 "dma_device_type": 2 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "system", 00:11:26.381 "dma_device_type": 1 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.381 "dma_device_type": 2 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "system", 00:11:26.381 "dma_device_type": 1 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.381 "dma_device_type": 2 00:11:26.381 } 00:11:26.381 ], 00:11:26.381 "driver_specific": { 00:11:26.381 "raid": { 00:11:26.381 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:26.381 "strip_size_kb": 64, 00:11:26.381 "state": "online", 00:11:26.381 "raid_level": "concat", 00:11:26.381 "superblock": true, 00:11:26.381 "num_base_bdevs": 4, 00:11:26.381 "num_base_bdevs_discovered": 4, 00:11:26.381 "num_base_bdevs_operational": 4, 00:11:26.381 "base_bdevs_list": [ 00:11:26.381 { 00:11:26.381 "name": "BaseBdev1", 00:11:26.381 "uuid": "798b7c96-121e-46a8-827d-4e4fa631806e", 00:11:26.381 "is_configured": true, 00:11:26.381 "data_offset": 2048, 00:11:26.381 "data_size": 63488 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "name": "BaseBdev2", 00:11:26.381 "uuid": "988290d7-915d-4cec-beb4-6131705a4501", 00:11:26.381 "is_configured": true, 00:11:26.381 "data_offset": 2048, 00:11:26.381 "data_size": 63488 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "name": "BaseBdev3", 00:11:26.381 "uuid": "5f952e94-f277-4083-9a31-a61fae831d23", 00:11:26.381 "is_configured": true, 
00:11:26.381 "data_offset": 2048, 00:11:26.381 "data_size": 63488 00:11:26.381 }, 00:11:26.381 { 00:11:26.381 "name": "BaseBdev4", 00:11:26.381 "uuid": "63b82cbb-037f-40e4-9741-631c7051e79f", 00:11:26.381 "is_configured": true, 00:11:26.381 "data_offset": 2048, 00:11:26.381 "data_size": 63488 00:11:26.381 } 00:11:26.381 ] 00:11:26.381 } 00:11:26.381 } 00:11:26.381 }' 00:11:26.381 04:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.642 BaseBdev2 00:11:26.642 BaseBdev3 00:11:26.642 BaseBdev4' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.642 04:00:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.642 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 [2024-11-18 04:00:23.284093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.902 [2024-11-18 04:00:23.284142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.902 [2024-11-18 04:00:23.284201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:26.902 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.902 "name": "Existed_Raid", 00:11:26.902 "uuid": "db1f1848-92d7-41c3-9a1c-3a1d4c695e56", 00:11:26.902 "strip_size_kb": 64, 00:11:26.902 "state": "offline", 00:11:26.902 "raid_level": "concat", 00:11:26.902 "superblock": true, 00:11:26.902 "num_base_bdevs": 4, 00:11:26.902 "num_base_bdevs_discovered": 3, 00:11:26.902 "num_base_bdevs_operational": 3, 00:11:26.902 "base_bdevs_list": [ 00:11:26.902 { 00:11:26.902 "name": null, 00:11:26.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.902 "is_configured": false, 00:11:26.902 "data_offset": 0, 00:11:26.902 "data_size": 63488 00:11:26.902 }, 00:11:26.902 { 00:11:26.902 "name": "BaseBdev2", 00:11:26.902 "uuid": "988290d7-915d-4cec-beb4-6131705a4501", 00:11:26.902 "is_configured": true, 00:11:26.902 "data_offset": 2048, 00:11:26.902 "data_size": 63488 00:11:26.902 }, 00:11:26.902 { 00:11:26.902 "name": "BaseBdev3", 00:11:26.902 "uuid": "5f952e94-f277-4083-9a31-a61fae831d23", 00:11:26.902 "is_configured": true, 00:11:26.902 "data_offset": 2048, 00:11:26.903 "data_size": 63488 00:11:26.903 }, 00:11:26.903 { 00:11:26.903 "name": "BaseBdev4", 00:11:26.903 "uuid": "63b82cbb-037f-40e4-9741-631c7051e79f", 00:11:26.903 "is_configured": true, 00:11:26.903 "data_offset": 2048, 00:11:26.903 "data_size": 63488 00:11:26.903 } 00:11:26.903 ] 00:11:26.903 }' 00:11:26.903 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.903 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.473 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.473 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.473 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.473 04:00:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.474 [2024-11-18 04:00:23.895946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.474 04:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.474 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.474 [2024-11-18 04:00:24.056028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:27.733 04:00:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.733 [2024-11-18 04:00:24.213597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:27.733 [2024-11-18 04:00:24.213663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.733 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.734 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.994 BaseBdev2 00:11:27.994 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.994 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 [ 00:11:27.995 { 00:11:27.995 "name": "BaseBdev2", 00:11:27.995 "aliases": [ 00:11:27.995 
"d09f1dc5-be5d-4847-b11a-41143d5693ad" 00:11:27.995 ], 00:11:27.995 "product_name": "Malloc disk", 00:11:27.995 "block_size": 512, 00:11:27.995 "num_blocks": 65536, 00:11:27.995 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:27.995 "assigned_rate_limits": { 00:11:27.995 "rw_ios_per_sec": 0, 00:11:27.995 "rw_mbytes_per_sec": 0, 00:11:27.995 "r_mbytes_per_sec": 0, 00:11:27.995 "w_mbytes_per_sec": 0 00:11:27.995 }, 00:11:27.995 "claimed": false, 00:11:27.995 "zoned": false, 00:11:27.995 "supported_io_types": { 00:11:27.995 "read": true, 00:11:27.995 "write": true, 00:11:27.995 "unmap": true, 00:11:27.995 "flush": true, 00:11:27.995 "reset": true, 00:11:27.995 "nvme_admin": false, 00:11:27.995 "nvme_io": false, 00:11:27.995 "nvme_io_md": false, 00:11:27.995 "write_zeroes": true, 00:11:27.995 "zcopy": true, 00:11:27.995 "get_zone_info": false, 00:11:27.995 "zone_management": false, 00:11:27.995 "zone_append": false, 00:11:27.995 "compare": false, 00:11:27.995 "compare_and_write": false, 00:11:27.995 "abort": true, 00:11:27.995 "seek_hole": false, 00:11:27.995 "seek_data": false, 00:11:27.995 "copy": true, 00:11:27.995 "nvme_iov_md": false 00:11:27.995 }, 00:11:27.995 "memory_domains": [ 00:11:27.995 { 00:11:27.995 "dma_device_id": "system", 00:11:27.995 "dma_device_type": 1 00:11:27.995 }, 00:11:27.995 { 00:11:27.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.995 "dma_device_type": 2 00:11:27.995 } 00:11:27.995 ], 00:11:27.995 "driver_specific": {} 00:11:27.995 } 00:11:27.995 ] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.995 04:00:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 BaseBdev3 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 [ 00:11:27.995 { 
00:11:27.995 "name": "BaseBdev3", 00:11:27.995 "aliases": [ 00:11:27.995 "d644bd6d-379d-49f3-a9ea-de2784f54ec0" 00:11:27.995 ], 00:11:27.995 "product_name": "Malloc disk", 00:11:27.995 "block_size": 512, 00:11:27.995 "num_blocks": 65536, 00:11:27.995 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:27.995 "assigned_rate_limits": { 00:11:27.995 "rw_ios_per_sec": 0, 00:11:27.995 "rw_mbytes_per_sec": 0, 00:11:27.995 "r_mbytes_per_sec": 0, 00:11:27.995 "w_mbytes_per_sec": 0 00:11:27.995 }, 00:11:27.995 "claimed": false, 00:11:27.995 "zoned": false, 00:11:27.995 "supported_io_types": { 00:11:27.995 "read": true, 00:11:27.995 "write": true, 00:11:27.995 "unmap": true, 00:11:27.995 "flush": true, 00:11:27.995 "reset": true, 00:11:27.995 "nvme_admin": false, 00:11:27.995 "nvme_io": false, 00:11:27.995 "nvme_io_md": false, 00:11:27.995 "write_zeroes": true, 00:11:27.995 "zcopy": true, 00:11:27.995 "get_zone_info": false, 00:11:27.995 "zone_management": false, 00:11:27.995 "zone_append": false, 00:11:27.995 "compare": false, 00:11:27.995 "compare_and_write": false, 00:11:27.995 "abort": true, 00:11:27.995 "seek_hole": false, 00:11:27.995 "seek_data": false, 00:11:27.995 "copy": true, 00:11:27.995 "nvme_iov_md": false 00:11:27.995 }, 00:11:27.995 "memory_domains": [ 00:11:27.995 { 00:11:27.995 "dma_device_id": "system", 00:11:27.995 "dma_device_type": 1 00:11:27.995 }, 00:11:27.995 { 00:11:27.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.995 "dma_device_type": 2 00:11:27.995 } 00:11:27.995 ], 00:11:27.995 "driver_specific": {} 00:11:27.995 } 00:11:27.995 ] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 BaseBdev4 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.995 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:27.995 [ 00:11:27.995 { 00:11:27.995 "name": "BaseBdev4", 00:11:27.995 "aliases": [ 00:11:27.995 "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d" 00:11:27.995 ], 00:11:27.995 "product_name": "Malloc disk", 00:11:27.995 "block_size": 512, 00:11:27.995 "num_blocks": 65536, 00:11:27.995 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:27.995 "assigned_rate_limits": { 00:11:27.995 "rw_ios_per_sec": 0, 00:11:27.995 "rw_mbytes_per_sec": 0, 00:11:27.995 "r_mbytes_per_sec": 0, 00:11:27.995 "w_mbytes_per_sec": 0 00:11:27.995 }, 00:11:27.995 "claimed": false, 00:11:27.995 "zoned": false, 00:11:27.995 "supported_io_types": { 00:11:27.995 "read": true, 00:11:27.995 "write": true, 00:11:27.995 "unmap": true, 00:11:27.995 "flush": true, 00:11:27.995 "reset": true, 00:11:27.995 "nvme_admin": false, 00:11:27.995 "nvme_io": false, 00:11:27.995 "nvme_io_md": false, 00:11:27.995 "write_zeroes": true, 00:11:27.995 "zcopy": true, 00:11:27.995 "get_zone_info": false, 00:11:27.995 "zone_management": false, 00:11:27.995 "zone_append": false, 00:11:27.995 "compare": false, 00:11:27.995 "compare_and_write": false, 00:11:27.995 "abort": true, 00:11:27.996 "seek_hole": false, 00:11:27.996 "seek_data": false, 00:11:27.996 "copy": true, 00:11:27.996 "nvme_iov_md": false 00:11:27.996 }, 00:11:27.996 "memory_domains": [ 00:11:27.996 { 00:11:27.996 "dma_device_id": "system", 00:11:27.996 "dma_device_type": 1 00:11:27.996 }, 00:11:27.996 { 00:11:27.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.996 "dma_device_type": 2 00:11:27.996 } 00:11:27.996 ], 00:11:27.996 "driver_specific": {} 00:11:27.996 } 00:11:27.996 ] 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.996 04:00:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.996 [2024-11-18 04:00:24.618033] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.996 [2024-11-18 04:00:24.618080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.996 [2024-11-18 04:00:24.618103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.996 [2024-11-18 04:00:24.620171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.996 [2024-11-18 04:00:24.620226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.996 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.256 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.256 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.256 "name": "Existed_Raid", 00:11:28.256 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:28.256 "strip_size_kb": 64, 00:11:28.256 "state": "configuring", 00:11:28.256 "raid_level": "concat", 00:11:28.256 "superblock": true, 00:11:28.256 "num_base_bdevs": 4, 00:11:28.256 "num_base_bdevs_discovered": 3, 00:11:28.256 "num_base_bdevs_operational": 4, 00:11:28.256 "base_bdevs_list": [ 00:11:28.256 { 00:11:28.256 "name": "BaseBdev1", 00:11:28.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.256 "is_configured": false, 00:11:28.256 "data_offset": 0, 00:11:28.256 "data_size": 0 00:11:28.256 }, 00:11:28.256 { 00:11:28.256 "name": "BaseBdev2", 00:11:28.256 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 
00:11:28.256 }, 00:11:28.256 { 00:11:28.256 "name": "BaseBdev3", 00:11:28.256 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 00:11:28.256 }, 00:11:28.256 { 00:11:28.256 "name": "BaseBdev4", 00:11:28.256 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:28.256 "is_configured": true, 00:11:28.256 "data_offset": 2048, 00:11:28.256 "data_size": 63488 00:11:28.256 } 00:11:28.256 ] 00:11:28.256 }' 00:11:28.256 04:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.256 04:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.516 [2024-11-18 04:00:25.061220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.516 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.517 "name": "Existed_Raid", 00:11:28.517 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:28.517 "strip_size_kb": 64, 00:11:28.517 "state": "configuring", 00:11:28.517 "raid_level": "concat", 00:11:28.517 "superblock": true, 00:11:28.517 "num_base_bdevs": 4, 00:11:28.517 "num_base_bdevs_discovered": 2, 00:11:28.517 "num_base_bdevs_operational": 4, 00:11:28.517 "base_bdevs_list": [ 00:11:28.517 { 00:11:28.517 "name": "BaseBdev1", 00:11:28.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.517 "is_configured": false, 00:11:28.517 "data_offset": 0, 00:11:28.517 "data_size": 0 00:11:28.517 }, 00:11:28.517 { 00:11:28.517 "name": null, 00:11:28.517 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:28.517 "is_configured": false, 00:11:28.517 "data_offset": 0, 00:11:28.517 "data_size": 63488 
00:11:28.517 }, 00:11:28.517 { 00:11:28.517 "name": "BaseBdev3", 00:11:28.517 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:28.517 "is_configured": true, 00:11:28.517 "data_offset": 2048, 00:11:28.517 "data_size": 63488 00:11:28.517 }, 00:11:28.517 { 00:11:28.517 "name": "BaseBdev4", 00:11:28.517 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:28.517 "is_configured": true, 00:11:28.517 "data_offset": 2048, 00:11:28.517 "data_size": 63488 00:11:28.517 } 00:11:28.517 ] 00:11:28.517 }' 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.517 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.087 [2024-11-18 04:00:25.626574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.087 BaseBdev1 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.087 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.088 [ 00:11:29.088 { 00:11:29.088 "name": "BaseBdev1", 00:11:29.088 "aliases": [ 00:11:29.088 "9b3f6b06-1956-4fbb-82ee-bef13ed17313" 00:11:29.088 ], 00:11:29.088 "product_name": "Malloc disk", 00:11:29.088 "block_size": 512, 00:11:29.088 "num_blocks": 65536, 00:11:29.088 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:29.088 "assigned_rate_limits": { 00:11:29.088 "rw_ios_per_sec": 0, 00:11:29.088 "rw_mbytes_per_sec": 0, 
00:11:29.088 "r_mbytes_per_sec": 0, 00:11:29.088 "w_mbytes_per_sec": 0 00:11:29.088 }, 00:11:29.088 "claimed": true, 00:11:29.088 "claim_type": "exclusive_write", 00:11:29.088 "zoned": false, 00:11:29.088 "supported_io_types": { 00:11:29.088 "read": true, 00:11:29.088 "write": true, 00:11:29.088 "unmap": true, 00:11:29.088 "flush": true, 00:11:29.088 "reset": true, 00:11:29.088 "nvme_admin": false, 00:11:29.088 "nvme_io": false, 00:11:29.088 "nvme_io_md": false, 00:11:29.088 "write_zeroes": true, 00:11:29.088 "zcopy": true, 00:11:29.088 "get_zone_info": false, 00:11:29.088 "zone_management": false, 00:11:29.088 "zone_append": false, 00:11:29.088 "compare": false, 00:11:29.088 "compare_and_write": false, 00:11:29.088 "abort": true, 00:11:29.088 "seek_hole": false, 00:11:29.088 "seek_data": false, 00:11:29.088 "copy": true, 00:11:29.088 "nvme_iov_md": false 00:11:29.088 }, 00:11:29.088 "memory_domains": [ 00:11:29.088 { 00:11:29.088 "dma_device_id": "system", 00:11:29.088 "dma_device_type": 1 00:11:29.088 }, 00:11:29.088 { 00:11:29.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.088 "dma_device_type": 2 00:11:29.088 } 00:11:29.088 ], 00:11:29.088 "driver_specific": {} 00:11:29.088 } 00:11:29.088 ] 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.088 04:00:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.088 "name": "Existed_Raid", 00:11:29.088 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:29.088 "strip_size_kb": 64, 00:11:29.088 "state": "configuring", 00:11:29.088 "raid_level": "concat", 00:11:29.088 "superblock": true, 00:11:29.088 "num_base_bdevs": 4, 00:11:29.088 "num_base_bdevs_discovered": 3, 00:11:29.088 "num_base_bdevs_operational": 4, 00:11:29.088 "base_bdevs_list": [ 00:11:29.088 { 00:11:29.088 "name": "BaseBdev1", 00:11:29.088 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:29.088 "is_configured": true, 00:11:29.088 "data_offset": 2048, 00:11:29.088 "data_size": 63488 00:11:29.088 }, 00:11:29.088 { 
00:11:29.088 "name": null, 00:11:29.088 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:29.088 "is_configured": false, 00:11:29.088 "data_offset": 0, 00:11:29.088 "data_size": 63488 00:11:29.088 }, 00:11:29.088 { 00:11:29.088 "name": "BaseBdev3", 00:11:29.088 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:29.088 "is_configured": true, 00:11:29.088 "data_offset": 2048, 00:11:29.088 "data_size": 63488 00:11:29.088 }, 00:11:29.088 { 00:11:29.088 "name": "BaseBdev4", 00:11:29.088 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:29.088 "is_configured": true, 00:11:29.088 "data_offset": 2048, 00:11:29.088 "data_size": 63488 00:11:29.088 } 00:11:29.088 ] 00:11:29.088 }' 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.088 04:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.658 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.658 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.659 [2024-11-18 04:00:26.125772] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.659 04:00:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.659 "name": "Existed_Raid", 00:11:29.659 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:29.659 "strip_size_kb": 64, 00:11:29.659 "state": "configuring", 00:11:29.659 "raid_level": "concat", 00:11:29.659 "superblock": true, 00:11:29.659 "num_base_bdevs": 4, 00:11:29.659 "num_base_bdevs_discovered": 2, 00:11:29.659 "num_base_bdevs_operational": 4, 00:11:29.659 "base_bdevs_list": [ 00:11:29.659 { 00:11:29.659 "name": "BaseBdev1", 00:11:29.659 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:29.659 "is_configured": true, 00:11:29.659 "data_offset": 2048, 00:11:29.659 "data_size": 63488 00:11:29.659 }, 00:11:29.659 { 00:11:29.659 "name": null, 00:11:29.659 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:29.659 "is_configured": false, 00:11:29.659 "data_offset": 0, 00:11:29.659 "data_size": 63488 00:11:29.659 }, 00:11:29.659 { 00:11:29.659 "name": null, 00:11:29.659 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:29.659 "is_configured": false, 00:11:29.659 "data_offset": 0, 00:11:29.659 "data_size": 63488 00:11:29.659 }, 00:11:29.659 { 00:11:29.659 "name": "BaseBdev4", 00:11:29.659 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:29.659 "is_configured": true, 00:11:29.659 "data_offset": 2048, 00:11:29.659 "data_size": 63488 00:11:29.659 } 00:11:29.659 ] 00:11:29.659 }' 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.659 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.228 
04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.228 [2024-11-18 04:00:26.612991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.228 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.229 "name": "Existed_Raid", 00:11:30.229 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:30.229 "strip_size_kb": 64, 00:11:30.229 "state": "configuring", 00:11:30.229 "raid_level": "concat", 00:11:30.229 "superblock": true, 00:11:30.229 "num_base_bdevs": 4, 00:11:30.229 "num_base_bdevs_discovered": 3, 00:11:30.229 "num_base_bdevs_operational": 4, 00:11:30.229 "base_bdevs_list": [ 00:11:30.229 { 00:11:30.229 "name": "BaseBdev1", 00:11:30.229 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:30.229 "is_configured": true, 00:11:30.229 "data_offset": 2048, 00:11:30.229 "data_size": 63488 00:11:30.229 }, 00:11:30.229 { 00:11:30.229 "name": null, 00:11:30.229 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:30.229 "is_configured": false, 00:11:30.229 "data_offset": 0, 00:11:30.229 "data_size": 63488 00:11:30.229 }, 00:11:30.229 { 00:11:30.229 "name": "BaseBdev3", 00:11:30.229 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:30.229 "is_configured": true, 00:11:30.229 "data_offset": 2048, 00:11:30.229 "data_size": 63488 00:11:30.229 }, 00:11:30.229 { 00:11:30.229 "name": "BaseBdev4", 00:11:30.229 "uuid": 
"175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:30.229 "is_configured": true, 00:11:30.229 "data_offset": 2048, 00:11:30.229 "data_size": 63488 00:11:30.229 } 00:11:30.229 ] 00:11:30.229 }' 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.229 04:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.489 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.489 [2024-11-18 04:00:27.044222] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.749 "name": "Existed_Raid", 00:11:30.749 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:30.749 "strip_size_kb": 64, 00:11:30.749 "state": "configuring", 00:11:30.749 "raid_level": "concat", 00:11:30.749 "superblock": true, 00:11:30.749 "num_base_bdevs": 4, 00:11:30.749 "num_base_bdevs_discovered": 2, 00:11:30.749 "num_base_bdevs_operational": 4, 00:11:30.749 "base_bdevs_list": [ 00:11:30.749 { 00:11:30.749 "name": null, 00:11:30.749 
"uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:30.749 "is_configured": false, 00:11:30.749 "data_offset": 0, 00:11:30.749 "data_size": 63488 00:11:30.749 }, 00:11:30.749 { 00:11:30.749 "name": null, 00:11:30.749 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:30.749 "is_configured": false, 00:11:30.749 "data_offset": 0, 00:11:30.749 "data_size": 63488 00:11:30.749 }, 00:11:30.749 { 00:11:30.749 "name": "BaseBdev3", 00:11:30.749 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:30.749 "is_configured": true, 00:11:30.749 "data_offset": 2048, 00:11:30.749 "data_size": 63488 00:11:30.749 }, 00:11:30.749 { 00:11:30.749 "name": "BaseBdev4", 00:11:30.749 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:30.749 "is_configured": true, 00:11:30.749 "data_offset": 2048, 00:11:30.749 "data_size": 63488 00:11:30.749 } 00:11:30.749 ] 00:11:30.749 }' 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.749 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.008 [2024-11-18 04:00:27.629041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.008 04:00:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.268 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.268 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.268 "name": "Existed_Raid", 00:11:31.268 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:31.268 "strip_size_kb": 64, 00:11:31.268 "state": "configuring", 00:11:31.268 "raid_level": "concat", 00:11:31.268 "superblock": true, 00:11:31.268 "num_base_bdevs": 4, 00:11:31.268 "num_base_bdevs_discovered": 3, 00:11:31.268 "num_base_bdevs_operational": 4, 00:11:31.268 "base_bdevs_list": [ 00:11:31.268 { 00:11:31.268 "name": null, 00:11:31.268 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:31.268 "is_configured": false, 00:11:31.268 "data_offset": 0, 00:11:31.268 "data_size": 63488 00:11:31.268 }, 00:11:31.268 { 00:11:31.268 "name": "BaseBdev2", 00:11:31.268 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:31.268 "is_configured": true, 00:11:31.268 "data_offset": 2048, 00:11:31.268 "data_size": 63488 00:11:31.268 }, 00:11:31.268 { 00:11:31.268 "name": "BaseBdev3", 00:11:31.268 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:31.268 "is_configured": true, 00:11:31.268 "data_offset": 2048, 00:11:31.268 "data_size": 63488 00:11:31.268 }, 00:11:31.268 { 00:11:31.268 "name": "BaseBdev4", 00:11:31.268 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:31.268 "is_configured": true, 00:11:31.268 "data_offset": 2048, 00:11:31.268 "data_size": 63488 00:11:31.268 } 00:11:31.268 ] 00:11:31.268 }' 00:11:31.268 04:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.268 04:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.528 04:00:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:31.528 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9b3f6b06-1956-4fbb-82ee-bef13ed17313 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.789 [2024-11-18 04:00:28.217919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:31.789 [2024-11-18 04:00:28.218164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.789 [2024-11-18 04:00:28.218178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:31.789 [2024-11-18 04:00:28.218453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:31.789 [2024-11-18 04:00:28.218614] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.789 [2024-11-18 04:00:28.218632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:31.789 [2024-11-18 04:00:28.218766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.789 NewBaseBdev 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:31.789 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.789 04:00:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.789 [ 00:11:31.789 { 00:11:31.789 "name": "NewBaseBdev", 00:11:31.789 "aliases": [ 00:11:31.789 "9b3f6b06-1956-4fbb-82ee-bef13ed17313" 00:11:31.789 ], 00:11:31.789 "product_name": "Malloc disk", 00:11:31.789 "block_size": 512, 00:11:31.789 "num_blocks": 65536, 00:11:31.789 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:31.789 "assigned_rate_limits": { 00:11:31.789 "rw_ios_per_sec": 0, 00:11:31.789 "rw_mbytes_per_sec": 0, 00:11:31.789 "r_mbytes_per_sec": 0, 00:11:31.789 "w_mbytes_per_sec": 0 00:11:31.789 }, 00:11:31.789 "claimed": true, 00:11:31.789 "claim_type": "exclusive_write", 00:11:31.789 "zoned": false, 00:11:31.789 "supported_io_types": { 00:11:31.789 "read": true, 00:11:31.789 "write": true, 00:11:31.789 "unmap": true, 00:11:31.789 "flush": true, 00:11:31.789 "reset": true, 00:11:31.789 "nvme_admin": false, 00:11:31.789 "nvme_io": false, 00:11:31.789 "nvme_io_md": false, 00:11:31.789 "write_zeroes": true, 00:11:31.789 "zcopy": true, 00:11:31.789 "get_zone_info": false, 00:11:31.789 "zone_management": false, 00:11:31.789 "zone_append": false, 00:11:31.789 "compare": false, 00:11:31.789 "compare_and_write": false, 00:11:31.789 "abort": true, 00:11:31.789 "seek_hole": false, 00:11:31.789 "seek_data": false, 00:11:31.789 "copy": true, 00:11:31.789 "nvme_iov_md": false 00:11:31.789 }, 00:11:31.789 "memory_domains": [ 00:11:31.789 { 00:11:31.790 "dma_device_id": "system", 00:11:31.790 "dma_device_type": 1 00:11:31.790 }, 00:11:31.790 { 00:11:31.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.790 "dma_device_type": 2 00:11:31.790 } 00:11:31.790 ], 00:11:31.790 "driver_specific": {} 00:11:31.790 } 00:11:31.790 ] 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.790 04:00:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.790 "name": "Existed_Raid", 00:11:31.790 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:31.790 "strip_size_kb": 64, 00:11:31.790 
"state": "online", 00:11:31.790 "raid_level": "concat", 00:11:31.790 "superblock": true, 00:11:31.790 "num_base_bdevs": 4, 00:11:31.790 "num_base_bdevs_discovered": 4, 00:11:31.790 "num_base_bdevs_operational": 4, 00:11:31.790 "base_bdevs_list": [ 00:11:31.790 { 00:11:31.790 "name": "NewBaseBdev", 00:11:31.790 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:31.790 "is_configured": true, 00:11:31.790 "data_offset": 2048, 00:11:31.790 "data_size": 63488 00:11:31.790 }, 00:11:31.790 { 00:11:31.790 "name": "BaseBdev2", 00:11:31.790 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:31.790 "is_configured": true, 00:11:31.790 "data_offset": 2048, 00:11:31.790 "data_size": 63488 00:11:31.790 }, 00:11:31.790 { 00:11:31.790 "name": "BaseBdev3", 00:11:31.790 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:31.790 "is_configured": true, 00:11:31.790 "data_offset": 2048, 00:11:31.790 "data_size": 63488 00:11:31.790 }, 00:11:31.790 { 00:11:31.790 "name": "BaseBdev4", 00:11:31.790 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:31.790 "is_configured": true, 00:11:31.790 "data_offset": 2048, 00:11:31.790 "data_size": 63488 00:11:31.790 } 00:11:31.790 ] 00:11:31.790 }' 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.790 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.361 
04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.361 [2024-11-18 04:00:28.745462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.361 "name": "Existed_Raid", 00:11:32.361 "aliases": [ 00:11:32.361 "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2" 00:11:32.361 ], 00:11:32.361 "product_name": "Raid Volume", 00:11:32.361 "block_size": 512, 00:11:32.361 "num_blocks": 253952, 00:11:32.361 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:32.361 "assigned_rate_limits": { 00:11:32.361 "rw_ios_per_sec": 0, 00:11:32.361 "rw_mbytes_per_sec": 0, 00:11:32.361 "r_mbytes_per_sec": 0, 00:11:32.361 "w_mbytes_per_sec": 0 00:11:32.361 }, 00:11:32.361 "claimed": false, 00:11:32.361 "zoned": false, 00:11:32.361 "supported_io_types": { 00:11:32.361 "read": true, 00:11:32.361 "write": true, 00:11:32.361 "unmap": true, 00:11:32.361 "flush": true, 00:11:32.361 "reset": true, 00:11:32.361 "nvme_admin": false, 00:11:32.361 "nvme_io": false, 00:11:32.361 "nvme_io_md": false, 00:11:32.361 "write_zeroes": true, 00:11:32.361 "zcopy": false, 00:11:32.361 "get_zone_info": false, 00:11:32.361 "zone_management": false, 00:11:32.361 "zone_append": false, 00:11:32.361 "compare": false, 00:11:32.361 "compare_and_write": false, 00:11:32.361 "abort": 
false, 00:11:32.361 "seek_hole": false, 00:11:32.361 "seek_data": false, 00:11:32.361 "copy": false, 00:11:32.361 "nvme_iov_md": false 00:11:32.361 }, 00:11:32.361 "memory_domains": [ 00:11:32.361 { 00:11:32.361 "dma_device_id": "system", 00:11:32.361 "dma_device_type": 1 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.361 "dma_device_type": 2 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "system", 00:11:32.361 "dma_device_type": 1 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.361 "dma_device_type": 2 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "system", 00:11:32.361 "dma_device_type": 1 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.361 "dma_device_type": 2 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "system", 00:11:32.361 "dma_device_type": 1 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.361 "dma_device_type": 2 00:11:32.361 } 00:11:32.361 ], 00:11:32.361 "driver_specific": { 00:11:32.361 "raid": { 00:11:32.361 "uuid": "c6b2a5b7-d014-4c73-9d67-cf9c7831a8a2", 00:11:32.361 "strip_size_kb": 64, 00:11:32.361 "state": "online", 00:11:32.361 "raid_level": "concat", 00:11:32.361 "superblock": true, 00:11:32.361 "num_base_bdevs": 4, 00:11:32.361 "num_base_bdevs_discovered": 4, 00:11:32.361 "num_base_bdevs_operational": 4, 00:11:32.361 "base_bdevs_list": [ 00:11:32.361 { 00:11:32.361 "name": "NewBaseBdev", 00:11:32.361 "uuid": "9b3f6b06-1956-4fbb-82ee-bef13ed17313", 00:11:32.361 "is_configured": true, 00:11:32.361 "data_offset": 2048, 00:11:32.361 "data_size": 63488 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "name": "BaseBdev2", 00:11:32.361 "uuid": "d09f1dc5-be5d-4847-b11a-41143d5693ad", 00:11:32.361 "is_configured": true, 00:11:32.361 "data_offset": 2048, 00:11:32.361 "data_size": 63488 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 
"name": "BaseBdev3", 00:11:32.361 "uuid": "d644bd6d-379d-49f3-a9ea-de2784f54ec0", 00:11:32.361 "is_configured": true, 00:11:32.361 "data_offset": 2048, 00:11:32.361 "data_size": 63488 00:11:32.361 }, 00:11:32.361 { 00:11:32.361 "name": "BaseBdev4", 00:11:32.361 "uuid": "175cbcd0-5dd1-4ef2-9c07-4ab597411e4d", 00:11:32.361 "is_configured": true, 00:11:32.361 "data_offset": 2048, 00:11:32.361 "data_size": 63488 00:11:32.361 } 00:11:32.361 ] 00:11:32.361 } 00:11:32.361 } 00:11:32.361 }' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.361 BaseBdev2 00:11:32.361 BaseBdev3 00:11:32.361 BaseBdev4' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.361 04:00:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.361 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.362 04:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.622 [2024-11-18 04:00:29.024628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.622 [2024-11-18 04:00:29.024683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.622 [2024-11-18 04:00:29.024799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.622 [2024-11-18 04:00:29.024905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.622 [2024-11-18 04:00:29.024921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71940 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71940 ']' 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71940 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71940 00:11:32.622 killing process with pid 71940 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71940' 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71940 00:11:32.622 [2024-11-18 04:00:29.062615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.622 04:00:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71940 00:11:32.880 [2024-11-18 04:00:29.480470] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.255 04:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.255 00:11:34.255 real 0m11.744s 00:11:34.255 user 0m18.428s 00:11:34.255 sys 0m2.177s 00:11:34.255 04:00:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.255 04:00:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.255 ************************************ 00:11:34.255 END TEST raid_state_function_test_sb 00:11:34.255 ************************************ 00:11:34.255 04:00:30 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:34.255 04:00:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.255 04:00:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.255 04:00:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.255 ************************************ 00:11:34.255 START TEST raid_superblock_test 00:11:34.255 ************************************ 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72615 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72615 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72615 ']' 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.255 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.256 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.256 04:00:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.256 [2024-11-18 04:00:30.859643] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:34.256 [2024-11-18 04:00:30.859772] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72615 ] 00:11:34.515 [2024-11-18 04:00:31.029517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.773 [2024-11-18 04:00:31.166332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.774 [2024-11-18 04:00:31.395985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.774 [2024-11-18 04:00:31.396058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:35.342 
04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.342 malloc1 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.342 [2024-11-18 04:00:31.744860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:35.342 [2024-11-18 04:00:31.744932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.342 [2024-11-18 04:00:31.744961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:35.342 [2024-11-18 04:00:31.744971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.342 [2024-11-18 04:00:31.747378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.342 [2024-11-18 04:00:31.747407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:35.342 pt1 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.342 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 malloc2 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 [2024-11-18 04:00:31.806086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.343 [2024-11-18 04:00:31.806145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.343 [2024-11-18 04:00:31.806170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:35.343 [2024-11-18 04:00:31.806180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.343 [2024-11-18 04:00:31.808554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.343 [2024-11-18 04:00:31.808583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.343 
pt2 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 malloc3 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 [2024-11-18 04:00:31.883109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:35.343 [2024-11-18 04:00:31.883169] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.343 [2024-11-18 04:00:31.883193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:35.343 [2024-11-18 04:00:31.883202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.343 [2024-11-18 04:00:31.885665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.343 [2024-11-18 04:00:31.885698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:35.343 pt3 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 malloc4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 [2024-11-18 04:00:31.946141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:35.343 [2024-11-18 04:00:31.946201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.343 [2024-11-18 04:00:31.946224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:35.343 [2024-11-18 04:00:31.946234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.343 [2024-11-18 04:00:31.948682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.343 [2024-11-18 04:00:31.948715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:35.343 pt4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 [2024-11-18 04:00:31.958154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:35.343 [2024-11-18 
04:00:31.960240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.343 [2024-11-18 04:00:31.960305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:35.343 [2024-11-18 04:00:31.960370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:35.343 [2024-11-18 04:00:31.960573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:35.343 [2024-11-18 04:00:31.960592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.343 [2024-11-18 04:00:31.960884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:35.343 [2024-11-18 04:00:31.961065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:35.343 [2024-11-18 04:00:31.961084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:35.343 [2024-11-18 04:00:31.961246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.343 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.344 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.603 04:00:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.603 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.603 "name": "raid_bdev1", 00:11:35.603 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:35.603 "strip_size_kb": 64, 00:11:35.603 "state": "online", 00:11:35.603 "raid_level": "concat", 00:11:35.603 "superblock": true, 00:11:35.603 "num_base_bdevs": 4, 00:11:35.603 "num_base_bdevs_discovered": 4, 00:11:35.603 "num_base_bdevs_operational": 4, 00:11:35.603 "base_bdevs_list": [ 00:11:35.603 { 00:11:35.603 "name": "pt1", 00:11:35.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.603 "is_configured": true, 00:11:35.603 "data_offset": 2048, 00:11:35.603 "data_size": 63488 00:11:35.603 }, 00:11:35.603 { 00:11:35.603 "name": "pt2", 00:11:35.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.603 "is_configured": true, 00:11:35.603 "data_offset": 2048, 00:11:35.603 "data_size": 63488 00:11:35.603 }, 00:11:35.603 { 00:11:35.603 "name": "pt3", 00:11:35.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.603 "is_configured": true, 00:11:35.603 "data_offset": 2048, 00:11:35.603 
"data_size": 63488 00:11:35.603 }, 00:11:35.603 { 00:11:35.603 "name": "pt4", 00:11:35.603 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.603 "is_configured": true, 00:11:35.603 "data_offset": 2048, 00:11:35.603 "data_size": 63488 00:11:35.603 } 00:11:35.603 ] 00:11:35.603 }' 00:11:35.604 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.604 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.863 [2024-11-18 04:00:32.365812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.863 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.863 "name": "raid_bdev1", 00:11:35.863 "aliases": [ 00:11:35.863 "1f06b2dd-1647-4766-bbc9-43e537f342e4" 
00:11:35.863 ], 00:11:35.863 "product_name": "Raid Volume", 00:11:35.863 "block_size": 512, 00:11:35.863 "num_blocks": 253952, 00:11:35.863 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:35.863 "assigned_rate_limits": { 00:11:35.864 "rw_ios_per_sec": 0, 00:11:35.864 "rw_mbytes_per_sec": 0, 00:11:35.864 "r_mbytes_per_sec": 0, 00:11:35.864 "w_mbytes_per_sec": 0 00:11:35.864 }, 00:11:35.864 "claimed": false, 00:11:35.864 "zoned": false, 00:11:35.864 "supported_io_types": { 00:11:35.864 "read": true, 00:11:35.864 "write": true, 00:11:35.864 "unmap": true, 00:11:35.864 "flush": true, 00:11:35.864 "reset": true, 00:11:35.864 "nvme_admin": false, 00:11:35.864 "nvme_io": false, 00:11:35.864 "nvme_io_md": false, 00:11:35.864 "write_zeroes": true, 00:11:35.864 "zcopy": false, 00:11:35.864 "get_zone_info": false, 00:11:35.864 "zone_management": false, 00:11:35.864 "zone_append": false, 00:11:35.864 "compare": false, 00:11:35.864 "compare_and_write": false, 00:11:35.864 "abort": false, 00:11:35.864 "seek_hole": false, 00:11:35.864 "seek_data": false, 00:11:35.864 "copy": false, 00:11:35.864 "nvme_iov_md": false 00:11:35.864 }, 00:11:35.864 "memory_domains": [ 00:11:35.864 { 00:11:35.864 "dma_device_id": "system", 00:11:35.864 "dma_device_type": 1 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.864 "dma_device_type": 2 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "system", 00:11:35.864 "dma_device_type": 1 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.864 "dma_device_type": 2 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "system", 00:11:35.864 "dma_device_type": 1 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.864 "dma_device_type": 2 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": "system", 00:11:35.864 "dma_device_type": 1 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:35.864 "dma_device_type": 2 00:11:35.864 } 00:11:35.864 ], 00:11:35.864 "driver_specific": { 00:11:35.864 "raid": { 00:11:35.864 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:35.864 "strip_size_kb": 64, 00:11:35.864 "state": "online", 00:11:35.864 "raid_level": "concat", 00:11:35.864 "superblock": true, 00:11:35.864 "num_base_bdevs": 4, 00:11:35.864 "num_base_bdevs_discovered": 4, 00:11:35.864 "num_base_bdevs_operational": 4, 00:11:35.864 "base_bdevs_list": [ 00:11:35.864 { 00:11:35.864 "name": "pt1", 00:11:35.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.864 "is_configured": true, 00:11:35.864 "data_offset": 2048, 00:11:35.864 "data_size": 63488 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "name": "pt2", 00:11:35.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.864 "is_configured": true, 00:11:35.864 "data_offset": 2048, 00:11:35.864 "data_size": 63488 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "name": "pt3", 00:11:35.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.864 "is_configured": true, 00:11:35.864 "data_offset": 2048, 00:11:35.864 "data_size": 63488 00:11:35.864 }, 00:11:35.864 { 00:11:35.864 "name": "pt4", 00:11:35.864 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.864 "is_configured": true, 00:11:35.864 "data_offset": 2048, 00:11:35.864 "data_size": 63488 00:11:35.864 } 00:11:35.864 ] 00:11:35.864 } 00:11:35.864 } 00:11:35.864 }' 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:35.864 pt2 00:11:35.864 pt3 00:11:35.864 pt4' 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.864 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.123 04:00:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.123 [2024-11-18 04:00:32.681239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1f06b2dd-1647-4766-bbc9-43e537f342e4 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1f06b2dd-1647-4766-bbc9-43e537f342e4 ']' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.123 [2024-11-18 04:00:32.724843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.123 [2024-11-18 04:00:32.724885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.123 [2024-11-18 04:00:32.724989] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.123 [2024-11-18 04:00:32.725066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.123 [2024-11-18 04:00:32.725083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:36.123 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:36.382 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.383 04:00:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.383 [2024-11-18 04:00:32.880629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:36.383 [2024-11-18 04:00:32.882843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:36.383 [2024-11-18 04:00:32.882901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:36.383 [2024-11-18 04:00:32.882935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:36.383 [2024-11-18 04:00:32.882995] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:36.383 [2024-11-18 04:00:32.883061] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:36.383 [2024-11-18 04:00:32.883079] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:36.383 [2024-11-18 04:00:32.883097] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:36.383 [2024-11-18 04:00:32.883110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.383 [2024-11-18 04:00:32.883122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:36.383 request: 00:11:36.383 { 00:11:36.383 "name": "raid_bdev1", 00:11:36.383 "raid_level": "concat", 00:11:36.383 "base_bdevs": [ 00:11:36.383 "malloc1", 00:11:36.383 "malloc2", 00:11:36.383 "malloc3", 00:11:36.383 "malloc4" 00:11:36.383 ], 00:11:36.383 "strip_size_kb": 64, 00:11:36.383 "superblock": false, 00:11:36.383 "method": "bdev_raid_create", 00:11:36.383 "req_id": 1 00:11:36.383 } 00:11:36.383 Got JSON-RPC error response 00:11:36.383 response: 00:11:36.383 { 00:11:36.383 "code": -17, 00:11:36.383 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:36.383 } 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.383 [2024-11-18 04:00:32.944401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.383 [2024-11-18 04:00:32.944560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.383 [2024-11-18 04:00:32.944599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.383 [2024-11-18 04:00:32.944643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.383 [2024-11-18 04:00:32.947190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.383 [2024-11-18 04:00:32.947267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.383 [2024-11-18 04:00:32.947380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.383 [2024-11-18 04:00:32.947475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.383 pt1 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.383 04:00:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.383 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.383 "name": "raid_bdev1", 00:11:36.383 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:36.383 "strip_size_kb": 64, 00:11:36.383 "state": "configuring", 00:11:36.383 "raid_level": "concat", 00:11:36.383 "superblock": true, 00:11:36.383 "num_base_bdevs": 4, 00:11:36.383 "num_base_bdevs_discovered": 1, 00:11:36.383 "num_base_bdevs_operational": 4, 00:11:36.383 "base_bdevs_list": [ 00:11:36.383 { 00:11:36.383 "name": "pt1", 00:11:36.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.383 "is_configured": true, 00:11:36.384 "data_offset": 2048, 00:11:36.384 "data_size": 63488 00:11:36.384 }, 00:11:36.384 { 00:11:36.384 "name": null, 00:11:36.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.384 "is_configured": false, 00:11:36.384 "data_offset": 2048, 00:11:36.384 "data_size": 63488 00:11:36.384 }, 00:11:36.384 { 00:11:36.384 "name": null, 00:11:36.384 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.384 "is_configured": false, 00:11:36.384 "data_offset": 2048, 00:11:36.384 "data_size": 63488 00:11:36.384 }, 00:11:36.384 { 00:11:36.384 "name": null, 00:11:36.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.384 "is_configured": false, 00:11:36.384 "data_offset": 2048, 00:11:36.384 "data_size": 63488 00:11:36.384 } 00:11:36.384 ] 00:11:36.384 }' 00:11:36.384 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.384 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.952 [2024-11-18 04:00:33.407732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:36.952 [2024-11-18 04:00:33.407849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.952 [2024-11-18 04:00:33.407876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:36.952 [2024-11-18 04:00:33.407889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.952 [2024-11-18 04:00:33.408408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.952 [2024-11-18 04:00:33.408439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.952 [2024-11-18 04:00:33.408529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:36.952 [2024-11-18 04:00:33.408558] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.952 pt2 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.952 [2024-11-18 04:00:33.419671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.952 04:00:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.952 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.952 "name": "raid_bdev1", 00:11:36.952 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:36.952 "strip_size_kb": 64, 00:11:36.952 "state": "configuring", 00:11:36.952 "raid_level": "concat", 00:11:36.952 "superblock": true, 00:11:36.952 "num_base_bdevs": 4, 00:11:36.952 "num_base_bdevs_discovered": 1, 00:11:36.952 "num_base_bdevs_operational": 4, 00:11:36.952 "base_bdevs_list": [ 00:11:36.952 { 00:11:36.952 "name": "pt1", 00:11:36.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.952 "is_configured": true, 00:11:36.952 "data_offset": 2048, 00:11:36.952 "data_size": 63488 00:11:36.952 }, 00:11:36.952 { 00:11:36.952 "name": null, 00:11:36.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.952 "is_configured": false, 00:11:36.952 "data_offset": 0, 00:11:36.952 "data_size": 63488 00:11:36.952 }, 00:11:36.953 { 00:11:36.953 "name": null, 00:11:36.953 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.953 "is_configured": false, 00:11:36.953 "data_offset": 2048, 00:11:36.953 "data_size": 63488 00:11:36.953 }, 00:11:36.953 { 00:11:36.953 "name": null, 00:11:36.953 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.953 "is_configured": false, 00:11:36.953 "data_offset": 2048, 00:11:36.953 "data_size": 63488 00:11:36.953 } 00:11:36.953 ] 00:11:36.953 }' 00:11:36.953 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.953 04:00:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.522 [2024-11-18 04:00:33.878921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.522 [2024-11-18 04:00:33.879096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.522 [2024-11-18 04:00:33.879138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:37.522 [2024-11-18 04:00:33.879166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.522 [2024-11-18 04:00:33.879726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.522 [2024-11-18 04:00:33.879783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.522 [2024-11-18 04:00:33.879934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.522 [2024-11-18 04:00:33.879994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.522 pt2 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.522 [2024-11-18 04:00:33.886809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.522 [2024-11-18 04:00:33.886906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.522 [2024-11-18 04:00:33.886955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:37.522 [2024-11-18 04:00:33.886987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.522 [2024-11-18 04:00:33.887379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.522 [2024-11-18 04:00:33.887431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.522 [2024-11-18 04:00:33.887509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:37.522 [2024-11-18 04:00:33.887559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.522 pt3 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.522 [2024-11-18 04:00:33.894769] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:37.522 [2024-11-18 04:00:33.894858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.522 [2024-11-18 04:00:33.894896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:37.522 [2024-11-18 04:00:33.894921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.522 [2024-11-18 04:00:33.895310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.522 [2024-11-18 04:00:33.895366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:37.522 [2024-11-18 04:00:33.895450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:37.522 [2024-11-18 04:00:33.895492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:37.522 [2024-11-18 04:00:33.895658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:37.522 [2024-11-18 04:00:33.895694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:37.522 [2024-11-18 04:00:33.895963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:37.522 [2024-11-18 04:00:33.896144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:37.522 [2024-11-18 04:00:33.896186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:37.522 [2024-11-18 04:00:33.896347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.522 pt4 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.522 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.523 "name": "raid_bdev1", 00:11:37.523 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:37.523 "strip_size_kb": 64, 00:11:37.523 "state": "online", 00:11:37.523 "raid_level": "concat", 00:11:37.523 
"superblock": true, 00:11:37.523 "num_base_bdevs": 4, 00:11:37.523 "num_base_bdevs_discovered": 4, 00:11:37.523 "num_base_bdevs_operational": 4, 00:11:37.523 "base_bdevs_list": [ 00:11:37.523 { 00:11:37.523 "name": "pt1", 00:11:37.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.523 "is_configured": true, 00:11:37.523 "data_offset": 2048, 00:11:37.523 "data_size": 63488 00:11:37.523 }, 00:11:37.523 { 00:11:37.523 "name": "pt2", 00:11:37.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.523 "is_configured": true, 00:11:37.523 "data_offset": 2048, 00:11:37.523 "data_size": 63488 00:11:37.523 }, 00:11:37.523 { 00:11:37.523 "name": "pt3", 00:11:37.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.523 "is_configured": true, 00:11:37.523 "data_offset": 2048, 00:11:37.523 "data_size": 63488 00:11:37.523 }, 00:11:37.523 { 00:11:37.523 "name": "pt4", 00:11:37.523 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.523 "is_configured": true, 00:11:37.523 "data_offset": 2048, 00:11:37.523 "data_size": 63488 00:11:37.523 } 00:11:37.523 ] 00:11:37.523 }' 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.523 04:00:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.783 04:00:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.783 [2024-11-18 04:00:34.298532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.783 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.783 "name": "raid_bdev1", 00:11:37.783 "aliases": [ 00:11:37.783 "1f06b2dd-1647-4766-bbc9-43e537f342e4" 00:11:37.783 ], 00:11:37.783 "product_name": "Raid Volume", 00:11:37.783 "block_size": 512, 00:11:37.783 "num_blocks": 253952, 00:11:37.783 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:37.783 "assigned_rate_limits": { 00:11:37.783 "rw_ios_per_sec": 0, 00:11:37.783 "rw_mbytes_per_sec": 0, 00:11:37.783 "r_mbytes_per_sec": 0, 00:11:37.783 "w_mbytes_per_sec": 0 00:11:37.783 }, 00:11:37.783 "claimed": false, 00:11:37.783 "zoned": false, 00:11:37.783 "supported_io_types": { 00:11:37.783 "read": true, 00:11:37.783 "write": true, 00:11:37.783 "unmap": true, 00:11:37.783 "flush": true, 00:11:37.783 "reset": true, 00:11:37.783 "nvme_admin": false, 00:11:37.783 "nvme_io": false, 00:11:37.783 "nvme_io_md": false, 00:11:37.783 "write_zeroes": true, 00:11:37.783 "zcopy": false, 00:11:37.783 "get_zone_info": false, 00:11:37.783 "zone_management": false, 00:11:37.783 "zone_append": false, 00:11:37.783 "compare": false, 00:11:37.783 "compare_and_write": false, 00:11:37.783 "abort": false, 00:11:37.783 "seek_hole": false, 00:11:37.783 "seek_data": false, 00:11:37.783 "copy": false, 00:11:37.783 "nvme_iov_md": false 00:11:37.783 }, 00:11:37.783 
"memory_domains": [ 00:11:37.783 { 00:11:37.783 "dma_device_id": "system", 00:11:37.783 "dma_device_type": 1 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.783 "dma_device_type": 2 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "system", 00:11:37.783 "dma_device_type": 1 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.783 "dma_device_type": 2 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "system", 00:11:37.783 "dma_device_type": 1 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.783 "dma_device_type": 2 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "system", 00:11:37.783 "dma_device_type": 1 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.783 "dma_device_type": 2 00:11:37.783 } 00:11:37.783 ], 00:11:37.783 "driver_specific": { 00:11:37.783 "raid": { 00:11:37.783 "uuid": "1f06b2dd-1647-4766-bbc9-43e537f342e4", 00:11:37.783 "strip_size_kb": 64, 00:11:37.783 "state": "online", 00:11:37.783 "raid_level": "concat", 00:11:37.783 "superblock": true, 00:11:37.783 "num_base_bdevs": 4, 00:11:37.783 "num_base_bdevs_discovered": 4, 00:11:37.783 "num_base_bdevs_operational": 4, 00:11:37.783 "base_bdevs_list": [ 00:11:37.783 { 00:11:37.783 "name": "pt1", 00:11:37.783 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.783 "is_configured": true, 00:11:37.783 "data_offset": 2048, 00:11:37.783 "data_size": 63488 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "name": "pt2", 00:11:37.783 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.783 "is_configured": true, 00:11:37.783 "data_offset": 2048, 00:11:37.783 "data_size": 63488 00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "name": "pt3", 00:11:37.783 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.783 "is_configured": true, 00:11:37.783 "data_offset": 2048, 00:11:37.783 "data_size": 63488 
00:11:37.783 }, 00:11:37.783 { 00:11:37.783 "name": "pt4", 00:11:37.783 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.783 "is_configured": true, 00:11:37.783 "data_offset": 2048, 00:11:37.783 "data_size": 63488 00:11:37.784 } 00:11:37.784 ] 00:11:37.784 } 00:11:37.784 } 00:11:37.784 }' 00:11:37.784 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.784 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:37.784 pt2 00:11:37.784 pt3 00:11:37.784 pt4' 00:11:37.784 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.044 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:38.045 
04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:38.045 [2024-11-18 04:00:34.645864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1f06b2dd-1647-4766-bbc9-43e537f342e4 '!=' 1f06b2dd-1647-4766-bbc9-43e537f342e4 ']' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72615 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72615 ']' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72615 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.045 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72615 00:11:38.304 killing process with pid 72615 00:11:38.304 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.304 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.304 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72615' 00:11:38.304 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72615 00:11:38.304 [2024-11-18 04:00:34.712188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.305 [2024-11-18 04:00:34.712305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.305 04:00:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72615 00:11:38.305 [2024-11-18 04:00:34.712387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.305 [2024-11-18 04:00:34.712398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:38.563 [2024-11-18 04:00:35.139148] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.940 04:00:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:39.940 00:11:39.940 real 0m5.541s 00:11:39.940 user 0m7.763s 00:11:39.940 sys 0m1.046s 00:11:39.940 04:00:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.940 04:00:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.940 ************************************ 00:11:39.940 END TEST raid_superblock_test 
00:11:39.940 ************************************ 00:11:39.940 04:00:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:39.940 04:00:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:39.940 04:00:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.940 04:00:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.940 ************************************ 00:11:39.940 START TEST raid_read_error_test 00:11:39.940 ************************************ 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bRpltRd5wx 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72874 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72874 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72874 ']' 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.940 04:00:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.940 [2024-11-18 04:00:36.495259] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:39.940 [2024-11-18 04:00:36.495404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72874 ] 00:11:40.199 [2024-11-18 04:00:36.674408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.199 [2024-11-18 04:00:36.809713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.458 [2024-11-18 04:00:37.044769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.458 [2024-11-18 04:00:37.044855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.717 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.717 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.717 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.717 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.717 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.717 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 BaseBdev1_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 true 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 [2024-11-18 04:00:37.398533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:40.976 [2024-11-18 04:00:37.398610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.976 [2024-11-18 04:00:37.398634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:40.976 [2024-11-18 04:00:37.398646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.976 [2024-11-18 04:00:37.401116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.976 [2024-11-18 04:00:37.401232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:40.976 BaseBdev1 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 BaseBdev2_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 true 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 [2024-11-18 04:00:37.471407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:40.976 [2024-11-18 04:00:37.471486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.976 [2024-11-18 04:00:37.471506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:40.976 [2024-11-18 04:00:37.471519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.976 [2024-11-18 04:00:37.474053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.976 [2024-11-18 04:00:37.474099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:40.976 BaseBdev2 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 BaseBdev3_malloc 00:11:40.976 04:00:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 true 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 [2024-11-18 04:00:37.557786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:40.976 [2024-11-18 04:00:37.557930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.976 [2024-11-18 04:00:37.557951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:40.976 [2024-11-18 04:00:37.557962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.976 [2024-11-18 04:00:37.560276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.976 [2024-11-18 04:00:37.560318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:40.976 BaseBdev3 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.976 BaseBdev4_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.976 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.235 true 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.235 [2024-11-18 04:00:37.631017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:41.235 [2024-11-18 04:00:37.631083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.235 [2024-11-18 04:00:37.631101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.235 [2024-11-18 04:00:37.631112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.235 [2024-11-18 04:00:37.633391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.235 [2024-11-18 04:00:37.633502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:41.235 BaseBdev4 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.235 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.235 [2024-11-18 04:00:37.643064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.235 [2024-11-18 04:00:37.645185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.235 [2024-11-18 04:00:37.645260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.236 [2024-11-18 04:00:37.645325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.236 [2024-11-18 04:00:37.645555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:41.236 [2024-11-18 04:00:37.645575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:41.236 [2024-11-18 04:00:37.645805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:41.236 [2024-11-18 04:00:37.645980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:41.236 [2024-11-18 04:00:37.645992] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:41.236 [2024-11-18 04:00:37.646136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:41.236 04:00:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.236 "name": "raid_bdev1", 00:11:41.236 "uuid": "0c07bafa-a99c-4ca7-984f-a814efe52575", 00:11:41.236 "strip_size_kb": 64, 00:11:41.236 "state": "online", 00:11:41.236 "raid_level": "concat", 00:11:41.236 "superblock": true, 00:11:41.236 "num_base_bdevs": 4, 00:11:41.236 "num_base_bdevs_discovered": 4, 00:11:41.236 "num_base_bdevs_operational": 4, 00:11:41.236 "base_bdevs_list": [ 
00:11:41.236 { 00:11:41.236 "name": "BaseBdev1", 00:11:41.236 "uuid": "f0c8d26f-87a5-56c3-89f2-72daef460dd8", 00:11:41.236 "is_configured": true, 00:11:41.236 "data_offset": 2048, 00:11:41.236 "data_size": 63488 00:11:41.236 }, 00:11:41.236 { 00:11:41.236 "name": "BaseBdev2", 00:11:41.236 "uuid": "7b352ea1-ba3e-5132-8399-beaf9fd19742", 00:11:41.236 "is_configured": true, 00:11:41.236 "data_offset": 2048, 00:11:41.236 "data_size": 63488 00:11:41.236 }, 00:11:41.236 { 00:11:41.236 "name": "BaseBdev3", 00:11:41.236 "uuid": "89b25f7a-4f20-5f62-9a5f-e847b7033d63", 00:11:41.236 "is_configured": true, 00:11:41.236 "data_offset": 2048, 00:11:41.236 "data_size": 63488 00:11:41.236 }, 00:11:41.236 { 00:11:41.236 "name": "BaseBdev4", 00:11:41.236 "uuid": "527b797b-61b7-51d2-8c90-aadc4006520f", 00:11:41.236 "is_configured": true, 00:11:41.236 "data_offset": 2048, 00:11:41.236 "data_size": 63488 00:11:41.236 } 00:11:41.236 ] 00:11:41.236 }' 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.236 04:00:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.494 04:00:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.494 04:00:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.754 [2024-11-18 04:00:38.203536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.701 04:00:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.701 04:00:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.701 "name": "raid_bdev1", 00:11:42.701 "uuid": "0c07bafa-a99c-4ca7-984f-a814efe52575", 00:11:42.701 "strip_size_kb": 64, 00:11:42.701 "state": "online", 00:11:42.701 "raid_level": "concat", 00:11:42.701 "superblock": true, 00:11:42.701 "num_base_bdevs": 4, 00:11:42.701 "num_base_bdevs_discovered": 4, 00:11:42.701 "num_base_bdevs_operational": 4, 00:11:42.701 "base_bdevs_list": [ 00:11:42.701 { 00:11:42.701 "name": "BaseBdev1", 00:11:42.701 "uuid": "f0c8d26f-87a5-56c3-89f2-72daef460dd8", 00:11:42.701 "is_configured": true, 00:11:42.701 "data_offset": 2048, 00:11:42.701 "data_size": 63488 00:11:42.701 }, 00:11:42.701 { 00:11:42.701 "name": "BaseBdev2", 00:11:42.701 "uuid": "7b352ea1-ba3e-5132-8399-beaf9fd19742", 00:11:42.701 "is_configured": true, 00:11:42.701 "data_offset": 2048, 00:11:42.701 "data_size": 63488 00:11:42.701 }, 00:11:42.701 { 00:11:42.701 "name": "BaseBdev3", 00:11:42.701 "uuid": "89b25f7a-4f20-5f62-9a5f-e847b7033d63", 00:11:42.701 "is_configured": true, 00:11:42.701 "data_offset": 2048, 00:11:42.701 "data_size": 63488 00:11:42.701 }, 00:11:42.701 { 00:11:42.701 "name": "BaseBdev4", 00:11:42.701 "uuid": "527b797b-61b7-51d2-8c90-aadc4006520f", 00:11:42.701 "is_configured": true, 00:11:42.701 "data_offset": 2048, 00:11:42.701 "data_size": 63488 00:11:42.701 } 00:11:42.701 ] 00:11:42.701 }' 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.701 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.960 [2024-11-18 04:00:39.580591] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.960 [2024-11-18 04:00:39.580643] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.960 [2024-11-18 04:00:39.583280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.960 [2024-11-18 04:00:39.583348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.960 [2024-11-18 04:00:39.583397] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.960 [2024-11-18 04:00:39.583413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:42.960 { 00:11:42.960 "results": [ 00:11:42.960 { 00:11:42.960 "job": "raid_bdev1", 00:11:42.960 "core_mask": "0x1", 00:11:42.960 "workload": "randrw", 00:11:42.960 "percentage": 50, 00:11:42.960 "status": "finished", 00:11:42.960 "queue_depth": 1, 00:11:42.960 "io_size": 131072, 00:11:42.960 "runtime": 1.377548, 00:11:42.960 "iops": 13927.645352466847, 00:11:42.960 "mibps": 1740.9556690583559, 00:11:42.960 "io_failed": 1, 00:11:42.960 "io_timeout": 0, 00:11:42.960 "avg_latency_us": 101.28474706423086, 00:11:42.960 "min_latency_us": 24.705676855895195, 00:11:42.960 "max_latency_us": 1330.7528384279476 00:11:42.960 } 00:11:42.960 ], 00:11:42.960 "core_count": 1 00:11:42.960 } 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72874 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72874 ']' 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72874 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:42.960 04:00:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.960 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72874 00:11:43.219 killing process with pid 72874 00:11:43.219 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.219 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.219 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72874' 00:11:43.219 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72874 00:11:43.219 [2024-11-18 04:00:39.629380] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.219 04:00:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72874 00:11:43.478 [2024-11-18 04:00:39.977383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.855 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bRpltRd5wx 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:44.856 ************************************ 00:11:44.856 END TEST raid_read_error_test 00:11:44.856 ************************************ 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 
00:11:44.856 00:11:44.856 real 0m4.846s 00:11:44.856 user 0m5.609s 00:11:44.856 sys 0m0.692s 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.856 04:00:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.856 04:00:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:44.856 04:00:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.856 04:00:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.856 04:00:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.856 ************************************ 00:11:44.856 START TEST raid_write_error_test 00:11:44.856 ************************************ 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.856 04:00:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.hlyeQBxrvV 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73025 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73025 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73025 ']' 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.856 04:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.856 [2024-11-18 04:00:41.400557] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:44.856 [2024-11-18 04:00:41.400686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73025 ] 00:11:45.115 [2024-11-18 04:00:41.572028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.115 [2024-11-18 04:00:41.709843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.373 [2024-11-18 04:00:41.944018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.373 [2024-11-18 04:00:41.944072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.632 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.632 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.632 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.632 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.632 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.632 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 BaseBdev1_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 true 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 [2024-11-18 04:00:42.295710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.891 [2024-11-18 04:00:42.295879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.891 [2024-11-18 04:00:42.295925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.891 [2024-11-18 04:00:42.295960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.891 [2024-11-18 04:00:42.298268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.891 [2024-11-18 04:00:42.298339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.891 BaseBdev1 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 BaseBdev2_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.891 04:00:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 true 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 [2024-11-18 04:00:42.368650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.891 [2024-11-18 04:00:42.368720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.891 [2024-11-18 04:00:42.368737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.891 [2024-11-18 04:00:42.368749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.891 [2024-11-18 04:00:42.371042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.891 [2024-11-18 04:00:42.371080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.891 BaseBdev2 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:45.891 BaseBdev3_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 true 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 [2024-11-18 04:00:42.455209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.891 [2024-11-18 04:00:42.455272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.891 [2024-11-18 04:00:42.455289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.891 [2024-11-18 04:00:42.455299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.891 [2024-11-18 04:00:42.457623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.891 [2024-11-18 04:00:42.457733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.891 BaseBdev3 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 BaseBdev4_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 true 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.891 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.891 [2024-11-18 04:00:42.528463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:45.891 [2024-11-18 04:00:42.528535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.891 [2024-11-18 04:00:42.528555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.891 [2024-11-18 04:00:42.528567] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.151 [2024-11-18 04:00:42.530951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.151 [2024-11-18 04:00:42.530987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:46.151 BaseBdev4 
00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 [2024-11-18 04:00:42.540509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.151 [2024-11-18 04:00:42.542527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.151 [2024-11-18 04:00:42.542679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.151 [2024-11-18 04:00:42.542749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.151 [2024-11-18 04:00:42.542979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:46.151 [2024-11-18 04:00:42.542992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:46.151 [2024-11-18 04:00:42.543221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:46.151 [2024-11-18 04:00:42.543377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:46.151 [2024-11-18 04:00:42.543388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:46.151 [2024-11-18 04:00:42.543524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.151 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.151 "name": "raid_bdev1", 00:11:46.151 "uuid": "fbaf65e6-e5c2-4857-a3b3-204440f31156", 00:11:46.151 "strip_size_kb": 64, 00:11:46.151 "state": "online", 00:11:46.151 "raid_level": "concat", 00:11:46.151 "superblock": true, 00:11:46.151 "num_base_bdevs": 4, 00:11:46.151 "num_base_bdevs_discovered": 4, 00:11:46.151 
"num_base_bdevs_operational": 4, 00:11:46.151 "base_bdevs_list": [ 00:11:46.151 { 00:11:46.151 "name": "BaseBdev1", 00:11:46.151 "uuid": "7b3baedc-86e9-5ae6-82f5-b155183173f3", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 }, 00:11:46.151 { 00:11:46.151 "name": "BaseBdev2", 00:11:46.151 "uuid": "b1400eff-6f17-5d2d-b7ae-4ef33603f48d", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 }, 00:11:46.151 { 00:11:46.151 "name": "BaseBdev3", 00:11:46.151 "uuid": "aef7cf56-d61f-5ee5-9d1d-aaafc0ac394a", 00:11:46.151 "is_configured": true, 00:11:46.151 "data_offset": 2048, 00:11:46.151 "data_size": 63488 00:11:46.151 }, 00:11:46.151 { 00:11:46.152 "name": "BaseBdev4", 00:11:46.152 "uuid": "ec3d04eb-fc8c-584d-af5c-493ac4a404a2", 00:11:46.152 "is_configured": true, 00:11:46.152 "data_offset": 2048, 00:11:46.152 "data_size": 63488 00:11:46.152 } 00:11:46.152 ] 00:11:46.152 }' 00:11:46.152 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.152 04:00:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.410 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:46.410 04:00:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.669 [2024-11-18 04:00:43.053009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:47.604 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.605 04:00:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.605 04:00:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.605 04:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.605 "name": "raid_bdev1", 00:11:47.605 "uuid": "fbaf65e6-e5c2-4857-a3b3-204440f31156", 00:11:47.605 "strip_size_kb": 64, 00:11:47.605 "state": "online", 00:11:47.605 "raid_level": "concat", 00:11:47.605 "superblock": true, 00:11:47.605 "num_base_bdevs": 4, 00:11:47.605 "num_base_bdevs_discovered": 4, 00:11:47.605 "num_base_bdevs_operational": 4, 00:11:47.605 "base_bdevs_list": [ 00:11:47.605 { 00:11:47.605 "name": "BaseBdev1", 00:11:47.605 "uuid": "7b3baedc-86e9-5ae6-82f5-b155183173f3", 00:11:47.605 "is_configured": true, 00:11:47.605 "data_offset": 2048, 00:11:47.605 "data_size": 63488 00:11:47.605 }, 00:11:47.605 { 00:11:47.605 "name": "BaseBdev2", 00:11:47.605 "uuid": "b1400eff-6f17-5d2d-b7ae-4ef33603f48d", 00:11:47.605 "is_configured": true, 00:11:47.605 "data_offset": 2048, 00:11:47.605 "data_size": 63488 00:11:47.605 }, 00:11:47.605 { 00:11:47.605 "name": "BaseBdev3", 00:11:47.605 "uuid": "aef7cf56-d61f-5ee5-9d1d-aaafc0ac394a", 00:11:47.605 "is_configured": true, 00:11:47.605 "data_offset": 2048, 00:11:47.605 "data_size": 63488 00:11:47.605 }, 00:11:47.605 { 00:11:47.605 "name": "BaseBdev4", 00:11:47.605 "uuid": "ec3d04eb-fc8c-584d-af5c-493ac4a404a2", 00:11:47.605 "is_configured": true, 00:11:47.605 "data_offset": 2048, 00:11:47.605 "data_size": 63488 00:11:47.605 } 00:11:47.605 ] 00:11:47.605 }' 00:11:47.605 04:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.605 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.864 [2024-11-18 04:00:44.397745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.864 [2024-11-18 04:00:44.397905] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.864 [2024-11-18 04:00:44.400515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.864 [2024-11-18 04:00:44.400619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.864 [2024-11-18 04:00:44.400686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.864 [2024-11-18 04:00:44.400737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:47.864 { 00:11:47.864 "results": [ 00:11:47.864 { 00:11:47.864 "job": "raid_bdev1", 00:11:47.864 "core_mask": "0x1", 00:11:47.864 "workload": "randrw", 00:11:47.864 "percentage": 50, 00:11:47.864 "status": "finished", 00:11:47.864 "queue_depth": 1, 00:11:47.864 "io_size": 131072, 00:11:47.864 "runtime": 1.345258, 00:11:47.864 "iops": 14006.978587007103, 00:11:47.864 "mibps": 1750.8723233758878, 00:11:47.864 "io_failed": 1, 00:11:47.864 "io_timeout": 0, 00:11:47.864 "avg_latency_us": 100.716366322803, 00:11:47.864 "min_latency_us": 25.152838427947597, 00:11:47.864 "max_latency_us": 1366.5257641921398 00:11:47.864 } 00:11:47.864 ], 00:11:47.864 "core_count": 1 00:11:47.864 } 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73025 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73025 ']' 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73025 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73025 00:11:47.864 killing process with pid 73025 00:11:47.864 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.865 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.865 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73025' 00:11:47.865 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73025 00:11:47.865 [2024-11-18 04:00:44.436489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.865 04:00:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73025 00:11:48.432 [2024-11-18 04:00:44.782007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hlyeQBxrvV 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:49.809 00:11:49.809 real 0m4.729s 00:11:49.809 user 0m5.430s 
00:11:49.809 sys 0m0.641s 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.809 04:00:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.809 ************************************ 00:11:49.809 END TEST raid_write_error_test 00:11:49.809 ************************************ 00:11:49.809 04:00:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:49.809 04:00:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:49.809 04:00:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:49.809 04:00:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.809 04:00:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.809 ************************************ 00:11:49.809 START TEST raid_state_function_test 00:11:49.809 ************************************ 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.809 
04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:49.809 04:00:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73169 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:49.809 Process raid pid: 73169 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73169' 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73169 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73169 ']' 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.809 04:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.809 [2024-11-18 04:00:46.191368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:49.809 [2024-11-18 04:00:46.191511] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:49.809 [2024-11-18 04:00:46.369460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:50.069 [2024-11-18 04:00:46.509307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:50.330 [2024-11-18 04:00:46.748046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:50.330 [2024-11-18 04:00:46.748119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.595 [2024-11-18 04:00:47.024898] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:50.595 [2024-11-18 04:00:47.024972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:50.595 [2024-11-18 04:00:47.024982] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:50.595 [2024-11-18 04:00:47.024992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:50.595 [2024-11-18 04:00:47.024999] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:50.595 [2024-11-18 04:00:47.025008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:50.595 [2024-11-18 04:00:47.025014] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:50.595 [2024-11-18 04:00:47.025023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:50.595 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:50.596 "name": "Existed_Raid",
00:11:50.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.596 "strip_size_kb": 0,
00:11:50.596 "state": "configuring",
00:11:50.596 "raid_level": "raid1",
00:11:50.596 "superblock": false,
00:11:50.596 "num_base_bdevs": 4,
00:11:50.596 "num_base_bdevs_discovered": 0,
00:11:50.596 "num_base_bdevs_operational": 4,
00:11:50.596 "base_bdevs_list": [
00:11:50.596 {
00:11:50.596 "name": "BaseBdev1",
00:11:50.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.596 "is_configured": false,
00:11:50.596 "data_offset": 0,
00:11:50.596 "data_size": 0
00:11:50.596 },
00:11:50.596 {
00:11:50.596 "name": "BaseBdev2",
00:11:50.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.596 "is_configured": false,
00:11:50.596 "data_offset": 0,
00:11:50.596 "data_size": 0
00:11:50.596 },
00:11:50.596 {
00:11:50.596 "name": "BaseBdev3",
00:11:50.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.596 "is_configured": false,
00:11:50.596 "data_offset": 0,
00:11:50.596 "data_size": 0
00:11:50.596 },
00:11:50.596 {
00:11:50.596 "name": "BaseBdev4",
00:11:50.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.596 "is_configured": false,
00:11:50.596 "data_offset": 0,
00:11:50.596 "data_size": 0
00:11:50.596 }
00:11:50.596 ]
00:11:50.596 }'
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:50.596 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.864 [2024-11-18 04:00:47.380199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:50.864 [2024-11-18 04:00:47.380260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.864 [2024-11-18 04:00:47.392189] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:50.864 [2024-11-18 04:00:47.392270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:50.864 [2024-11-18 04:00:47.392279] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:50.864 [2024-11-18 04:00:47.392290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:50.864 [2024-11-18 04:00:47.392296] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:50.864 [2024-11-18 04:00:47.392306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:50.864 [2024-11-18 04:00:47.392312] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:50.864 [2024-11-18 04:00:47.392321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:50.864 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.865 [2024-11-18 04:00:47.445459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:50.865 BaseBdev1
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.865 [
00:11:50.865 {
00:11:50.865 "name": "BaseBdev1",
00:11:50.865 "aliases": [
00:11:50.865 "9ea3ead7-0590-485e-ba01-29d304557099"
00:11:50.865 ],
00:11:50.865 "product_name": "Malloc disk",
00:11:50.865 "block_size": 512,
00:11:50.865 "num_blocks": 65536,
00:11:50.865 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099",
00:11:50.865 "assigned_rate_limits": {
00:11:50.865 "rw_ios_per_sec": 0,
00:11:50.865 "rw_mbytes_per_sec": 0,
00:11:50.865 "r_mbytes_per_sec": 0,
00:11:50.865 "w_mbytes_per_sec": 0
00:11:50.865 },
00:11:50.865 "claimed": true,
00:11:50.865 "claim_type": "exclusive_write",
00:11:50.865 "zoned": false,
00:11:50.865 "supported_io_types": {
00:11:50.865 "read": true,
00:11:50.865 "write": true,
00:11:50.865 "unmap": true,
00:11:50.865 "flush": true,
00:11:50.865 "reset": true,
00:11:50.865 "nvme_admin": false,
00:11:50.865 "nvme_io": false,
00:11:50.865 "nvme_io_md": false,
00:11:50.865 "write_zeroes": true,
00:11:50.865 "zcopy": true,
00:11:50.865 "get_zone_info": false,
00:11:50.865 "zone_management": false,
00:11:50.865 "zone_append": false,
00:11:50.865 "compare": false,
00:11:50.865 "compare_and_write": false,
00:11:50.865 "abort": true,
00:11:50.865 "seek_hole": false,
00:11:50.865 "seek_data": false,
00:11:50.865 "copy": true,
00:11:50.865 "nvme_iov_md": false
00:11:50.865 },
00:11:50.865 "memory_domains": [
00:11:50.865 {
00:11:50.865 "dma_device_id": "system",
00:11:50.865 "dma_device_type": 1
00:11:50.865 },
00:11:50.865 {
00:11:50.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:50.865 "dma_device_type": 2
00:11:50.865 }
00:11:50.865 ],
00:11:50.865 "driver_specific": {}
00:11:50.865 }
00:11:50.865 ]
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.865 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.124 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.124 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.124 "name": "Existed_Raid",
00:11:51.124 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.124 "strip_size_kb": 0,
00:11:51.124 "state": "configuring",
00:11:51.124 "raid_level": "raid1",
00:11:51.124 "superblock": false,
00:11:51.124 "num_base_bdevs": 4,
00:11:51.124 "num_base_bdevs_discovered": 1,
00:11:51.124 "num_base_bdevs_operational": 4,
00:11:51.124 "base_bdevs_list": [
00:11:51.124 {
00:11:51.124 "name": "BaseBdev1",
00:11:51.124 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099",
00:11:51.124 "is_configured": true,
00:11:51.124 "data_offset": 0,
00:11:51.124 "data_size": 65536
00:11:51.124 },
00:11:51.124 {
00:11:51.124 "name": "BaseBdev2",
00:11:51.124 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.124 "is_configured": false,
00:11:51.124 "data_offset": 0,
00:11:51.124 "data_size": 0
00:11:51.124 },
00:11:51.124 {
00:11:51.124 "name": "BaseBdev3",
00:11:51.124 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.124 "is_configured": false,
00:11:51.124 "data_offset": 0,
00:11:51.124 "data_size": 0
00:11:51.124 },
00:11:51.124 {
00:11:51.124 "name": "BaseBdev4",
00:11:51.124 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.124 "is_configured": false,
00:11:51.124 "data_offset": 0,
00:11:51.124 "data_size": 0
00:11:51.124 }
00:11:51.124 ]
00:11:51.124 }'
00:11:51.124 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.124 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.384 [2024-11-18 04:00:47.916722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:51.384 [2024-11-18 04:00:47.916797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.384 [2024-11-18 04:00:47.928741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:51.384 [2024-11-18 04:00:47.930826] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:51.384 [2024-11-18 04:00:47.930879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:51.384 [2024-11-18 04:00:47.930889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:51.384 [2024-11-18 04:00:47.930899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:51.384 [2024-11-18 04:00:47.930905] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:51.384 [2024-11-18 04:00:47.930913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.384 "name": "Existed_Raid",
00:11:51.384 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.384 "strip_size_kb": 0,
00:11:51.384 "state": "configuring",
00:11:51.384 "raid_level": "raid1",
00:11:51.384 "superblock": false,
00:11:51.384 "num_base_bdevs": 4,
00:11:51.384 "num_base_bdevs_discovered": 1,
00:11:51.384 "num_base_bdevs_operational": 4,
00:11:51.384 "base_bdevs_list": [
00:11:51.384 {
00:11:51.384 "name": "BaseBdev1",
00:11:51.384 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099",
00:11:51.384 "is_configured": true,
00:11:51.384 "data_offset": 0,
00:11:51.384 "data_size": 65536
00:11:51.384 },
00:11:51.384 {
00:11:51.384 "name": "BaseBdev2",
00:11:51.384 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.384 "is_configured": false,
00:11:51.384 "data_offset": 0,
00:11:51.384 "data_size": 0
00:11:51.384 },
00:11:51.384 {
00:11:51.384 "name": "BaseBdev3",
00:11:51.384 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.384 "is_configured": false,
00:11:51.384 "data_offset": 0,
00:11:51.384 "data_size": 0
00:11:51.384 },
00:11:51.384 {
00:11:51.384 "name": "BaseBdev4",
00:11:51.384 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.384 "is_configured": false,
00:11:51.384 "data_offset": 0,
00:11:51.384 "data_size": 0
00:11:51.384 }
00:11:51.384 ]
00:11:51.384 }'
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.384 04:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.952 [2024-11-18 04:00:48.409544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:51.952 BaseBdev2
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.952 [
00:11:51.952 {
00:11:51.952 "name": "BaseBdev2",
00:11:51.952 "aliases": [
00:11:51.952 "86e031ea-ffce-457a-ab47-717441d9f8b1"
00:11:51.952 ],
00:11:51.952 "product_name": "Malloc disk",
00:11:51.952 "block_size": 512,
00:11:51.952 "num_blocks": 65536,
00:11:51.952 "uuid": "86e031ea-ffce-457a-ab47-717441d9f8b1",
00:11:51.952 "assigned_rate_limits": {
00:11:51.952 "rw_ios_per_sec": 0,
00:11:51.952 "rw_mbytes_per_sec": 0,
00:11:51.952 "r_mbytes_per_sec": 0,
00:11:51.952 "w_mbytes_per_sec": 0
00:11:51.952 },
00:11:51.952 "claimed": true,
00:11:51.952 "claim_type": "exclusive_write",
00:11:51.952 "zoned": false,
00:11:51.952 "supported_io_types": {
00:11:51.952 "read": true,
00:11:51.952 "write": true,
00:11:51.952 "unmap": true,
00:11:51.952 "flush": true,
00:11:51.952 "reset": true,
00:11:51.952 "nvme_admin": false,
00:11:51.952 "nvme_io": false,
00:11:51.952 "nvme_io_md": false,
00:11:51.952 "write_zeroes": true,
00:11:51.952 "zcopy": true,
00:11:51.952 "get_zone_info": false,
00:11:51.952 "zone_management": false,
00:11:51.952 "zone_append": false,
00:11:51.952 "compare": false,
00:11:51.952 "compare_and_write": false,
00:11:51.952 "abort": true,
00:11:51.952 "seek_hole": false,
00:11:51.952 "seek_data": false,
00:11:51.952 "copy": true,
00:11:51.952 "nvme_iov_md": false
00:11:51.952 },
00:11:51.952 "memory_domains": [
00:11:51.952 {
00:11:51.952 "dma_device_id": "system",
00:11:51.952 "dma_device_type": 1
00:11:51.952 },
00:11:51.952 {
00:11:51.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:51.952 "dma_device_type": 2
00:11:51.952 }
00:11:51.952 ],
00:11:51.952 "driver_specific": {}
00:11:51.952 }
00:11:51.952 ]
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.952 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.952 "name": "Existed_Raid",
00:11:51.952 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.952 "strip_size_kb": 0,
00:11:51.952 "state": "configuring",
00:11:51.952 "raid_level": "raid1",
00:11:51.952 "superblock": false,
00:11:51.952 "num_base_bdevs": 4,
00:11:51.952 "num_base_bdevs_discovered": 2,
00:11:51.952 "num_base_bdevs_operational": 4,
00:11:51.952 "base_bdevs_list": [
00:11:51.952 {
00:11:51.952 "name": "BaseBdev1",
00:11:51.952 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099",
00:11:51.952 "is_configured": true,
00:11:51.952 "data_offset": 0,
00:11:51.952 "data_size": 65536
00:11:51.952 },
00:11:51.952 {
00:11:51.952 "name": "BaseBdev2",
00:11:51.952 "uuid": "86e031ea-ffce-457a-ab47-717441d9f8b1",
00:11:51.952 "is_configured": true,
00:11:51.952 "data_offset": 0,
00:11:51.952 "data_size": 65536
00:11:51.952 },
00:11:51.952 {
00:11:51.952 "name": "BaseBdev3",
00:11:51.952 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.952 "is_configured": false,
00:11:51.952 "data_offset": 0,
00:11:51.952 "data_size": 0
00:11:51.952 },
00:11:51.952 {
00:11:51.952 "name": "BaseBdev4",
00:11:51.952 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.952 "is_configured": false,
00:11:51.953 "data_offset": 0,
00:11:51.953 "data_size": 0
00:11:51.953 }
00:11:51.953 ]
00:11:51.953 }'
00:11:51.953 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.953 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.521 [2024-11-18 04:00:48.933579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:52.521 BaseBdev3
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.521 [
00:11:52.521 {
00:11:52.521 "name": "BaseBdev3",
00:11:52.521 "aliases": [
00:11:52.521 "c4739485-f0dd-4a0d-a970-d89d756dd390"
00:11:52.521 ],
00:11:52.521 "product_name": "Malloc disk",
00:11:52.521 "block_size": 512,
00:11:52.521 "num_blocks": 65536,
00:11:52.521 "uuid": "c4739485-f0dd-4a0d-a970-d89d756dd390",
00:11:52.521 "assigned_rate_limits": {
00:11:52.521 "rw_ios_per_sec": 0,
00:11:52.521 "rw_mbytes_per_sec": 0,
00:11:52.521 "r_mbytes_per_sec": 0,
00:11:52.521 "w_mbytes_per_sec": 0
00:11:52.521 },
00:11:52.521 "claimed": true,
00:11:52.521 "claim_type": "exclusive_write",
00:11:52.521 "zoned": false,
00:11:52.521 "supported_io_types": {
00:11:52.521 "read": true,
00:11:52.521 "write": true,
00:11:52.521 "unmap": true,
00:11:52.521 "flush": true,
00:11:52.521 "reset": true,
00:11:52.521 "nvme_admin": false,
00:11:52.521 "nvme_io": false,
00:11:52.521 "nvme_io_md": false,
00:11:52.521 "write_zeroes": true,
00:11:52.521 "zcopy": true,
00:11:52.521 "get_zone_info": false,
00:11:52.521 "zone_management": false,
00:11:52.521 "zone_append": false,
00:11:52.521 "compare": false,
00:11:52.521 "compare_and_write": false,
00:11:52.521 "abort": true,
00:11:52.521 "seek_hole": false,
00:11:52.521 "seek_data": false,
00:11:52.521 "copy": true,
00:11:52.521 "nvme_iov_md": false
00:11:52.521 },
00:11:52.521 "memory_domains": [
00:11:52.521 {
00:11:52.521 "dma_device_id": "system",
00:11:52.521 "dma_device_type": 1
00:11:52.521 },
00:11:52.521 {
00:11:52.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:52.521 "dma_device_type": 2
00:11:52.521 }
00:11:52.521 ],
00:11:52.521 "driver_specific": {}
00:11:52.521 }
00:11:52.521 ]
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.521 04:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.521 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.521 "name": "Existed_Raid", 00:11:52.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.521 "strip_size_kb": 0, 00:11:52.521 "state": "configuring", 00:11:52.521 "raid_level": "raid1", 00:11:52.521 "superblock": false, 00:11:52.521 "num_base_bdevs": 4, 00:11:52.521 "num_base_bdevs_discovered": 3, 00:11:52.521 "num_base_bdevs_operational": 4, 00:11:52.521 "base_bdevs_list": [ 00:11:52.521 { 00:11:52.521 "name": "BaseBdev1", 00:11:52.521 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099", 00:11:52.521 "is_configured": true, 00:11:52.521 "data_offset": 0, 00:11:52.521 "data_size": 65536 00:11:52.521 }, 00:11:52.521 { 00:11:52.521 "name": "BaseBdev2", 00:11:52.521 "uuid": "86e031ea-ffce-457a-ab47-717441d9f8b1", 00:11:52.521 "is_configured": true, 00:11:52.521 "data_offset": 0, 00:11:52.521 "data_size": 65536 00:11:52.521 }, 00:11:52.521 { 00:11:52.521 "name": "BaseBdev3", 00:11:52.521 "uuid": "c4739485-f0dd-4a0d-a970-d89d756dd390", 00:11:52.521 "is_configured": true, 00:11:52.521 "data_offset": 0, 00:11:52.521 "data_size": 65536 00:11:52.521 }, 00:11:52.521 { 00:11:52.521 "name": "BaseBdev4", 00:11:52.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.521 "is_configured": false, 
00:11:52.521 "data_offset": 0, 00:11:52.521 "data_size": 0 00:11:52.521 } 00:11:52.521 ] 00:11:52.521 }' 00:11:52.521 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.521 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.779 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.779 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.779 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.038 [2024-11-18 04:00:49.443345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.038 [2024-11-18 04:00:49.443413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:53.038 [2024-11-18 04:00:49.443422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:53.038 [2024-11-18 04:00:49.443750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:53.038 [2024-11-18 04:00:49.443966] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:53.038 [2024-11-18 04:00:49.443988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:53.038 [2024-11-18 04:00:49.444286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.038 BaseBdev4 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.038 [ 00:11:53.038 { 00:11:53.038 "name": "BaseBdev4", 00:11:53.038 "aliases": [ 00:11:53.038 "3469f407-420b-45ae-88c9-75ff7b0ef4f1" 00:11:53.038 ], 00:11:53.038 "product_name": "Malloc disk", 00:11:53.038 "block_size": 512, 00:11:53.038 "num_blocks": 65536, 00:11:53.038 "uuid": "3469f407-420b-45ae-88c9-75ff7b0ef4f1", 00:11:53.038 "assigned_rate_limits": { 00:11:53.038 "rw_ios_per_sec": 0, 00:11:53.038 "rw_mbytes_per_sec": 0, 00:11:53.038 "r_mbytes_per_sec": 0, 00:11:53.038 "w_mbytes_per_sec": 0 00:11:53.038 }, 00:11:53.038 "claimed": true, 00:11:53.038 "claim_type": "exclusive_write", 00:11:53.038 "zoned": false, 00:11:53.038 "supported_io_types": { 00:11:53.038 "read": true, 00:11:53.038 "write": true, 00:11:53.038 "unmap": true, 00:11:53.038 "flush": true, 00:11:53.038 "reset": true, 00:11:53.038 
"nvme_admin": false, 00:11:53.038 "nvme_io": false, 00:11:53.038 "nvme_io_md": false, 00:11:53.038 "write_zeroes": true, 00:11:53.038 "zcopy": true, 00:11:53.038 "get_zone_info": false, 00:11:53.038 "zone_management": false, 00:11:53.038 "zone_append": false, 00:11:53.038 "compare": false, 00:11:53.038 "compare_and_write": false, 00:11:53.038 "abort": true, 00:11:53.038 "seek_hole": false, 00:11:53.038 "seek_data": false, 00:11:53.038 "copy": true, 00:11:53.038 "nvme_iov_md": false 00:11:53.038 }, 00:11:53.038 "memory_domains": [ 00:11:53.038 { 00:11:53.038 "dma_device_id": "system", 00:11:53.038 "dma_device_type": 1 00:11:53.038 }, 00:11:53.038 { 00:11:53.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.038 "dma_device_type": 2 00:11:53.038 } 00:11:53.038 ], 00:11:53.038 "driver_specific": {} 00:11:53.038 } 00:11:53.038 ] 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.038 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.039 04:00:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.039 "name": "Existed_Raid", 00:11:53.039 "uuid": "4f014884-c08e-40f9-a9b5-2b5e126082bc", 00:11:53.039 "strip_size_kb": 0, 00:11:53.039 "state": "online", 00:11:53.039 "raid_level": "raid1", 00:11:53.039 "superblock": false, 00:11:53.039 "num_base_bdevs": 4, 00:11:53.039 "num_base_bdevs_discovered": 4, 00:11:53.039 "num_base_bdevs_operational": 4, 00:11:53.039 "base_bdevs_list": [ 00:11:53.039 { 00:11:53.039 "name": "BaseBdev1", 00:11:53.039 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099", 00:11:53.039 "is_configured": true, 00:11:53.039 "data_offset": 0, 00:11:53.039 "data_size": 65536 00:11:53.039 }, 00:11:53.039 { 00:11:53.039 "name": "BaseBdev2", 00:11:53.039 "uuid": "86e031ea-ffce-457a-ab47-717441d9f8b1", 00:11:53.039 "is_configured": true, 00:11:53.039 "data_offset": 0, 00:11:53.039 "data_size": 65536 00:11:53.039 }, 00:11:53.039 { 00:11:53.039 "name": "BaseBdev3", 00:11:53.039 "uuid": 
"c4739485-f0dd-4a0d-a970-d89d756dd390", 00:11:53.039 "is_configured": true, 00:11:53.039 "data_offset": 0, 00:11:53.039 "data_size": 65536 00:11:53.039 }, 00:11:53.039 { 00:11:53.039 "name": "BaseBdev4", 00:11:53.039 "uuid": "3469f407-420b-45ae-88c9-75ff7b0ef4f1", 00:11:53.039 "is_configured": true, 00:11:53.039 "data_offset": 0, 00:11:53.039 "data_size": 65536 00:11:53.039 } 00:11:53.039 ] 00:11:53.039 }' 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.039 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.298 [2024-11-18 04:00:49.879066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.298 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.298 04:00:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.298 "name": "Existed_Raid", 00:11:53.298 "aliases": [ 00:11:53.298 "4f014884-c08e-40f9-a9b5-2b5e126082bc" 00:11:53.298 ], 00:11:53.298 "product_name": "Raid Volume", 00:11:53.298 "block_size": 512, 00:11:53.298 "num_blocks": 65536, 00:11:53.298 "uuid": "4f014884-c08e-40f9-a9b5-2b5e126082bc", 00:11:53.298 "assigned_rate_limits": { 00:11:53.298 "rw_ios_per_sec": 0, 00:11:53.298 "rw_mbytes_per_sec": 0, 00:11:53.298 "r_mbytes_per_sec": 0, 00:11:53.298 "w_mbytes_per_sec": 0 00:11:53.298 }, 00:11:53.298 "claimed": false, 00:11:53.298 "zoned": false, 00:11:53.298 "supported_io_types": { 00:11:53.298 "read": true, 00:11:53.298 "write": true, 00:11:53.298 "unmap": false, 00:11:53.298 "flush": false, 00:11:53.298 "reset": true, 00:11:53.298 "nvme_admin": false, 00:11:53.298 "nvme_io": false, 00:11:53.298 "nvme_io_md": false, 00:11:53.298 "write_zeroes": true, 00:11:53.298 "zcopy": false, 00:11:53.298 "get_zone_info": false, 00:11:53.298 "zone_management": false, 00:11:53.298 "zone_append": false, 00:11:53.298 "compare": false, 00:11:53.298 "compare_and_write": false, 00:11:53.298 "abort": false, 00:11:53.298 "seek_hole": false, 00:11:53.298 "seek_data": false, 00:11:53.298 "copy": false, 00:11:53.298 "nvme_iov_md": false 00:11:53.298 }, 00:11:53.298 "memory_domains": [ 00:11:53.298 { 00:11:53.298 "dma_device_id": "system", 00:11:53.298 "dma_device_type": 1 00:11:53.298 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.299 "dma_device_type": 2 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "system", 00:11:53.299 "dma_device_type": 1 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.299 "dma_device_type": 2 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "system", 00:11:53.299 "dma_device_type": 1 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:53.299 "dma_device_type": 2 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "system", 00:11:53.299 "dma_device_type": 1 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.299 "dma_device_type": 2 00:11:53.299 } 00:11:53.299 ], 00:11:53.299 "driver_specific": { 00:11:53.299 "raid": { 00:11:53.299 "uuid": "4f014884-c08e-40f9-a9b5-2b5e126082bc", 00:11:53.299 "strip_size_kb": 0, 00:11:53.299 "state": "online", 00:11:53.299 "raid_level": "raid1", 00:11:53.299 "superblock": false, 00:11:53.299 "num_base_bdevs": 4, 00:11:53.299 "num_base_bdevs_discovered": 4, 00:11:53.299 "num_base_bdevs_operational": 4, 00:11:53.299 "base_bdevs_list": [ 00:11:53.299 { 00:11:53.299 "name": "BaseBdev1", 00:11:53.299 "uuid": "9ea3ead7-0590-485e-ba01-29d304557099", 00:11:53.299 "is_configured": true, 00:11:53.299 "data_offset": 0, 00:11:53.299 "data_size": 65536 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "name": "BaseBdev2", 00:11:53.299 "uuid": "86e031ea-ffce-457a-ab47-717441d9f8b1", 00:11:53.299 "is_configured": true, 00:11:53.299 "data_offset": 0, 00:11:53.299 "data_size": 65536 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "name": "BaseBdev3", 00:11:53.299 "uuid": "c4739485-f0dd-4a0d-a970-d89d756dd390", 00:11:53.299 "is_configured": true, 00:11:53.299 "data_offset": 0, 00:11:53.299 "data_size": 65536 00:11:53.299 }, 00:11:53.299 { 00:11:53.299 "name": "BaseBdev4", 00:11:53.299 "uuid": "3469f407-420b-45ae-88c9-75ff7b0ef4f1", 00:11:53.299 "is_configured": true, 00:11:53.299 "data_offset": 0, 00:11:53.299 "data_size": 65536 00:11:53.299 } 00:11:53.299 ] 00:11:53.299 } 00:11:53.299 } 00:11:53.299 }' 00:11:53.299 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:53.557 BaseBdev2 00:11:53.557 BaseBdev3 
00:11:53.557 BaseBdev4' 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.557 04:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.558 04:00:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.558 04:00:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.558 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.558 [2024-11-18 04:00:50.154225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.816 
04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.816 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.817 "name": "Existed_Raid", 00:11:53.817 "uuid": "4f014884-c08e-40f9-a9b5-2b5e126082bc", 00:11:53.817 "strip_size_kb": 0, 00:11:53.817 "state": "online", 00:11:53.817 "raid_level": "raid1", 00:11:53.817 "superblock": false, 00:11:53.817 "num_base_bdevs": 4, 00:11:53.817 "num_base_bdevs_discovered": 3, 00:11:53.817 "num_base_bdevs_operational": 3, 00:11:53.817 "base_bdevs_list": [ 00:11:53.817 { 00:11:53.817 "name": null, 00:11:53.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.817 "is_configured": false, 00:11:53.817 "data_offset": 0, 00:11:53.817 "data_size": 65536 00:11:53.817 }, 00:11:53.817 { 00:11:53.817 "name": "BaseBdev2", 00:11:53.817 "uuid": "86e031ea-ffce-457a-ab47-717441d9f8b1", 00:11:53.817 "is_configured": true, 00:11:53.817 "data_offset": 0, 00:11:53.817 "data_size": 65536 00:11:53.817 }, 00:11:53.817 { 00:11:53.817 "name": "BaseBdev3", 00:11:53.817 "uuid": "c4739485-f0dd-4a0d-a970-d89d756dd390", 00:11:53.817 "is_configured": true, 00:11:53.817 "data_offset": 0, 
00:11:53.817 "data_size": 65536 00:11:53.817 }, 00:11:53.817 { 00:11:53.817 "name": "BaseBdev4", 00:11:53.817 "uuid": "3469f407-420b-45ae-88c9-75ff7b0ef4f1", 00:11:53.817 "is_configured": true, 00:11:53.817 "data_offset": 0, 00:11:53.817 "data_size": 65536 00:11:53.817 } 00:11:53.817 ] 00:11:53.817 }' 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.817 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.076 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:54.076 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.076 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.076 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.076 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.076 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.335 [2024-11-18 04:00:50.747607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.335 04:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.335 [2024-11-18 04:00:50.913014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.596 [2024-11-18 04:00:51.073495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:54.596 [2024-11-18 04:00:51.073620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.596 [2024-11-18 04:00:51.181786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.596 [2024-11-18 04:00:51.181851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.596 [2024-11-18 04:00:51.181866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.596 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.857 BaseBdev2 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.857 [ 00:11:54.857 { 00:11:54.857 "name": "BaseBdev2", 00:11:54.857 "aliases": [ 00:11:54.857 "d981d77c-c275-467b-93c3-1ee29032f414" 00:11:54.857 ], 00:11:54.857 "product_name": "Malloc disk", 00:11:54.857 "block_size": 512, 00:11:54.857 "num_blocks": 65536, 00:11:54.857 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:54.857 "assigned_rate_limits": { 00:11:54.857 "rw_ios_per_sec": 0, 00:11:54.857 "rw_mbytes_per_sec": 0, 00:11:54.857 "r_mbytes_per_sec": 0, 00:11:54.857 "w_mbytes_per_sec": 0 00:11:54.857 }, 00:11:54.857 "claimed": false, 00:11:54.857 "zoned": false, 00:11:54.857 "supported_io_types": { 00:11:54.857 "read": true, 00:11:54.857 "write": true, 00:11:54.857 "unmap": true, 00:11:54.857 "flush": true, 00:11:54.857 "reset": true, 00:11:54.857 "nvme_admin": false, 00:11:54.857 "nvme_io": false, 00:11:54.857 "nvme_io_md": false, 00:11:54.857 "write_zeroes": true, 00:11:54.857 "zcopy": true, 00:11:54.857 "get_zone_info": false, 00:11:54.857 "zone_management": false, 00:11:54.857 "zone_append": false, 
00:11:54.857 "compare": false, 00:11:54.857 "compare_and_write": false, 00:11:54.857 "abort": true, 00:11:54.857 "seek_hole": false, 00:11:54.857 "seek_data": false, 00:11:54.857 "copy": true, 00:11:54.857 "nvme_iov_md": false 00:11:54.857 }, 00:11:54.857 "memory_domains": [ 00:11:54.857 { 00:11:54.857 "dma_device_id": "system", 00:11:54.857 "dma_device_type": 1 00:11:54.857 }, 00:11:54.857 { 00:11:54.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.857 "dma_device_type": 2 00:11:54.857 } 00:11:54.857 ], 00:11:54.857 "driver_specific": {} 00:11:54.857 } 00:11:54.857 ] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.857 BaseBdev3 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.857 [ 00:11:54.857 { 00:11:54.857 "name": "BaseBdev3", 00:11:54.857 "aliases": [ 00:11:54.857 "ef885126-3aef-4836-9366-b19706a7ab30" 00:11:54.857 ], 00:11:54.857 "product_name": "Malloc disk", 00:11:54.857 "block_size": 512, 00:11:54.857 "num_blocks": 65536, 00:11:54.857 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:54.857 "assigned_rate_limits": { 00:11:54.857 "rw_ios_per_sec": 0, 00:11:54.857 "rw_mbytes_per_sec": 0, 00:11:54.857 "r_mbytes_per_sec": 0, 00:11:54.857 "w_mbytes_per_sec": 0 00:11:54.857 }, 00:11:54.857 "claimed": false, 00:11:54.857 "zoned": false, 00:11:54.857 "supported_io_types": { 00:11:54.857 "read": true, 00:11:54.857 "write": true, 00:11:54.857 "unmap": true, 00:11:54.857 "flush": true, 00:11:54.857 "reset": true, 00:11:54.857 "nvme_admin": false, 00:11:54.857 "nvme_io": false, 00:11:54.857 "nvme_io_md": false, 00:11:54.857 "write_zeroes": true, 00:11:54.857 "zcopy": true, 00:11:54.857 "get_zone_info": false, 00:11:54.857 "zone_management": false, 00:11:54.857 "zone_append": false, 
00:11:54.857 "compare": false, 00:11:54.857 "compare_and_write": false, 00:11:54.857 "abort": true, 00:11:54.857 "seek_hole": false, 00:11:54.857 "seek_data": false, 00:11:54.857 "copy": true, 00:11:54.857 "nvme_iov_md": false 00:11:54.857 }, 00:11:54.857 "memory_domains": [ 00:11:54.857 { 00:11:54.857 "dma_device_id": "system", 00:11:54.857 "dma_device_type": 1 00:11:54.857 }, 00:11:54.857 { 00:11:54.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.857 "dma_device_type": 2 00:11:54.857 } 00:11:54.857 ], 00:11:54.857 "driver_specific": {} 00:11:54.857 } 00:11:54.857 ] 00:11:54.857 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 BaseBdev4 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.858 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 [ 00:11:54.858 { 00:11:54.858 "name": "BaseBdev4", 00:11:54.858 "aliases": [ 00:11:54.858 "c4b080a1-6146-456f-a34c-d266da3bd4cd" 00:11:54.858 ], 00:11:54.858 "product_name": "Malloc disk", 00:11:54.858 "block_size": 512, 00:11:54.858 "num_blocks": 65536, 00:11:54.858 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:54.858 "assigned_rate_limits": { 00:11:54.858 "rw_ios_per_sec": 0, 00:11:54.858 "rw_mbytes_per_sec": 0, 00:11:54.858 "r_mbytes_per_sec": 0, 00:11:54.858 "w_mbytes_per_sec": 0 00:11:54.858 }, 00:11:54.858 "claimed": false, 00:11:54.858 "zoned": false, 00:11:54.858 "supported_io_types": { 00:11:54.858 "read": true, 00:11:54.858 "write": true, 00:11:55.117 "unmap": true, 00:11:55.117 "flush": true, 00:11:55.117 "reset": true, 00:11:55.117 "nvme_admin": false, 00:11:55.117 "nvme_io": false, 00:11:55.117 "nvme_io_md": false, 00:11:55.117 "write_zeroes": true, 00:11:55.117 "zcopy": true, 00:11:55.117 "get_zone_info": false, 00:11:55.117 "zone_management": false, 00:11:55.117 "zone_append": false, 
00:11:55.117 "compare": false, 00:11:55.117 "compare_and_write": false, 00:11:55.117 "abort": true, 00:11:55.117 "seek_hole": false, 00:11:55.117 "seek_data": false, 00:11:55.117 "copy": true, 00:11:55.117 "nvme_iov_md": false 00:11:55.117 }, 00:11:55.117 "memory_domains": [ 00:11:55.117 { 00:11:55.117 "dma_device_id": "system", 00:11:55.117 "dma_device_type": 1 00:11:55.117 }, 00:11:55.117 { 00:11:55.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.117 "dma_device_type": 2 00:11:55.117 } 00:11:55.117 ], 00:11:55.117 "driver_specific": {} 00:11:55.117 } 00:11:55.117 ] 00:11:55.117 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.117 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.117 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 [2024-11-18 04:00:51.509770] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.118 [2024-11-18 04:00:51.509922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.118 [2024-11-18 04:00:51.509962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.118 [2024-11-18 04:00:51.512074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.118 [2024-11-18 04:00:51.512165] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:55.118 "name": "Existed_Raid", 00:11:55.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.118 "strip_size_kb": 0, 00:11:55.118 "state": "configuring", 00:11:55.118 "raid_level": "raid1", 00:11:55.118 "superblock": false, 00:11:55.118 "num_base_bdevs": 4, 00:11:55.118 "num_base_bdevs_discovered": 3, 00:11:55.118 "num_base_bdevs_operational": 4, 00:11:55.118 "base_bdevs_list": [ 00:11:55.118 { 00:11:55.118 "name": "BaseBdev1", 00:11:55.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.118 "is_configured": false, 00:11:55.118 "data_offset": 0, 00:11:55.118 "data_size": 0 00:11:55.118 }, 00:11:55.118 { 00:11:55.118 "name": "BaseBdev2", 00:11:55.118 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:55.118 "is_configured": true, 00:11:55.118 "data_offset": 0, 00:11:55.118 "data_size": 65536 00:11:55.118 }, 00:11:55.118 { 00:11:55.118 "name": "BaseBdev3", 00:11:55.118 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:55.118 "is_configured": true, 00:11:55.118 "data_offset": 0, 00:11:55.118 "data_size": 65536 00:11:55.118 }, 00:11:55.118 { 00:11:55.118 "name": "BaseBdev4", 00:11:55.118 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:55.118 "is_configured": true, 00:11:55.118 "data_offset": 0, 00:11:55.118 "data_size": 65536 00:11:55.118 } 00:11:55.118 ] 00:11:55.118 }' 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.118 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.377 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:55.377 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.378 [2024-11-18 04:00:51.953093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.378 04:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.378 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.378 "name": "Existed_Raid", 00:11:55.378 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:55.378 "strip_size_kb": 0, 00:11:55.378 "state": "configuring", 00:11:55.378 "raid_level": "raid1", 00:11:55.378 "superblock": false, 00:11:55.378 "num_base_bdevs": 4, 00:11:55.378 "num_base_bdevs_discovered": 2, 00:11:55.378 "num_base_bdevs_operational": 4, 00:11:55.378 "base_bdevs_list": [ 00:11:55.378 { 00:11:55.378 "name": "BaseBdev1", 00:11:55.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.378 "is_configured": false, 00:11:55.378 "data_offset": 0, 00:11:55.378 "data_size": 0 00:11:55.378 }, 00:11:55.378 { 00:11:55.378 "name": null, 00:11:55.378 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:55.378 "is_configured": false, 00:11:55.378 "data_offset": 0, 00:11:55.378 "data_size": 65536 00:11:55.378 }, 00:11:55.378 { 00:11:55.378 "name": "BaseBdev3", 00:11:55.378 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:55.378 "is_configured": true, 00:11:55.378 "data_offset": 0, 00:11:55.378 "data_size": 65536 00:11:55.378 }, 00:11:55.378 { 00:11:55.378 "name": "BaseBdev4", 00:11:55.378 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:55.378 "is_configured": true, 00:11:55.378 "data_offset": 0, 00:11:55.378 "data_size": 65536 00:11:55.378 } 00:11:55.378 ] 00:11:55.378 }' 00:11:55.378 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.378 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 [2024-11-18 04:00:52.494719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.947 BaseBdev1 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 [ 00:11:55.947 { 00:11:55.947 "name": "BaseBdev1", 00:11:55.947 "aliases": [ 00:11:55.947 "935038fd-0b18-45bc-8a97-bff267d07bee" 00:11:55.947 ], 00:11:55.947 "product_name": "Malloc disk", 00:11:55.947 "block_size": 512, 00:11:55.947 "num_blocks": 65536, 00:11:55.947 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:55.947 "assigned_rate_limits": { 00:11:55.947 "rw_ios_per_sec": 0, 00:11:55.947 "rw_mbytes_per_sec": 0, 00:11:55.947 "r_mbytes_per_sec": 0, 00:11:55.947 "w_mbytes_per_sec": 0 00:11:55.947 }, 00:11:55.947 "claimed": true, 00:11:55.947 "claim_type": "exclusive_write", 00:11:55.947 "zoned": false, 00:11:55.947 "supported_io_types": { 00:11:55.947 "read": true, 00:11:55.947 "write": true, 00:11:55.947 "unmap": true, 00:11:55.947 "flush": true, 00:11:55.947 "reset": true, 00:11:55.947 "nvme_admin": false, 00:11:55.947 "nvme_io": false, 00:11:55.947 "nvme_io_md": false, 00:11:55.947 "write_zeroes": true, 00:11:55.947 "zcopy": true, 00:11:55.947 "get_zone_info": false, 00:11:55.947 "zone_management": false, 00:11:55.947 "zone_append": false, 00:11:55.947 "compare": false, 00:11:55.947 "compare_and_write": false, 00:11:55.947 "abort": true, 00:11:55.947 "seek_hole": false, 00:11:55.947 "seek_data": false, 00:11:55.947 "copy": true, 00:11:55.947 "nvme_iov_md": false 00:11:55.947 }, 00:11:55.947 "memory_domains": [ 00:11:55.947 { 00:11:55.947 "dma_device_id": "system", 00:11:55.947 "dma_device_type": 1 00:11:55.947 }, 00:11:55.947 { 00:11:55.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.947 "dma_device_type": 2 00:11:55.947 } 00:11:55.947 ], 00:11:55.947 "driver_specific": {} 00:11:55.947 } 00:11:55.947 ] 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.947 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.208 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.208 "name": "Existed_Raid", 00:11:56.208 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:56.208 "strip_size_kb": 0, 00:11:56.208 "state": "configuring", 00:11:56.208 "raid_level": "raid1", 00:11:56.208 "superblock": false, 00:11:56.208 "num_base_bdevs": 4, 00:11:56.208 "num_base_bdevs_discovered": 3, 00:11:56.208 "num_base_bdevs_operational": 4, 00:11:56.208 "base_bdevs_list": [ 00:11:56.208 { 00:11:56.208 "name": "BaseBdev1", 00:11:56.208 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:56.208 "is_configured": true, 00:11:56.208 "data_offset": 0, 00:11:56.208 "data_size": 65536 00:11:56.208 }, 00:11:56.208 { 00:11:56.208 "name": null, 00:11:56.208 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:56.208 "is_configured": false, 00:11:56.208 "data_offset": 0, 00:11:56.208 "data_size": 65536 00:11:56.208 }, 00:11:56.208 { 00:11:56.208 "name": "BaseBdev3", 00:11:56.208 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:56.208 "is_configured": true, 00:11:56.208 "data_offset": 0, 00:11:56.208 "data_size": 65536 00:11:56.208 }, 00:11:56.208 { 00:11:56.208 "name": "BaseBdev4", 00:11:56.208 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:56.208 "is_configured": true, 00:11:56.208 "data_offset": 0, 00:11:56.208 "data_size": 65536 00:11:56.208 } 00:11:56.208 ] 00:11:56.208 }' 00:11:56.208 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.208 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.468 04:00:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 [2024-11-18 04:00:52.997975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.468 "name": "Existed_Raid", 00:11:56.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.468 "strip_size_kb": 0, 00:11:56.468 "state": "configuring", 00:11:56.468 "raid_level": "raid1", 00:11:56.468 "superblock": false, 00:11:56.468 "num_base_bdevs": 4, 00:11:56.468 "num_base_bdevs_discovered": 2, 00:11:56.468 "num_base_bdevs_operational": 4, 00:11:56.468 "base_bdevs_list": [ 00:11:56.468 { 00:11:56.468 "name": "BaseBdev1", 00:11:56.468 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:56.468 "is_configured": true, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 65536 00:11:56.468 }, 00:11:56.468 { 00:11:56.468 "name": null, 00:11:56.468 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:56.468 "is_configured": false, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 65536 00:11:56.468 }, 00:11:56.468 { 00:11:56.468 "name": null, 00:11:56.468 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:56.468 "is_configured": false, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 65536 00:11:56.468 }, 00:11:56.468 { 00:11:56.468 "name": "BaseBdev4", 00:11:56.468 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:56.468 "is_configured": true, 00:11:56.468 "data_offset": 0, 00:11:56.468 "data_size": 65536 00:11:56.468 } 00:11:56.468 ] 00:11:56.468 }' 00:11:56.468 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.468 04:00:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.039 [2024-11-18 04:00:53.509051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.039 04:00:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.039 "name": "Existed_Raid", 00:11:57.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.039 "strip_size_kb": 0, 00:11:57.039 "state": "configuring", 00:11:57.039 "raid_level": "raid1", 00:11:57.039 "superblock": false, 00:11:57.039 "num_base_bdevs": 4, 00:11:57.039 "num_base_bdevs_discovered": 3, 00:11:57.039 "num_base_bdevs_operational": 4, 00:11:57.039 "base_bdevs_list": [ 00:11:57.039 { 00:11:57.039 "name": "BaseBdev1", 00:11:57.039 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:57.039 "is_configured": true, 00:11:57.039 "data_offset": 0, 00:11:57.039 "data_size": 65536 00:11:57.039 }, 00:11:57.039 { 00:11:57.039 "name": null, 00:11:57.039 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:57.039 "is_configured": false, 00:11:57.039 "data_offset": 
0, 00:11:57.039 "data_size": 65536 00:11:57.039 }, 00:11:57.039 { 00:11:57.039 "name": "BaseBdev3", 00:11:57.039 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:57.039 "is_configured": true, 00:11:57.039 "data_offset": 0, 00:11:57.039 "data_size": 65536 00:11:57.039 }, 00:11:57.039 { 00:11:57.039 "name": "BaseBdev4", 00:11:57.039 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:57.039 "is_configured": true, 00:11:57.039 "data_offset": 0, 00:11:57.039 "data_size": 65536 00:11:57.039 } 00:11:57.039 ] 00:11:57.039 }' 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.039 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.609 04:00:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.609 [2024-11-18 04:00:53.988337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.609 04:00:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.609 "name": "Existed_Raid", 00:11:57.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.609 "strip_size_kb": 0, 00:11:57.609 "state": "configuring", 00:11:57.609 
"raid_level": "raid1", 00:11:57.609 "superblock": false, 00:11:57.609 "num_base_bdevs": 4, 00:11:57.609 "num_base_bdevs_discovered": 2, 00:11:57.609 "num_base_bdevs_operational": 4, 00:11:57.609 "base_bdevs_list": [ 00:11:57.609 { 00:11:57.609 "name": null, 00:11:57.609 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:57.609 "is_configured": false, 00:11:57.609 "data_offset": 0, 00:11:57.609 "data_size": 65536 00:11:57.609 }, 00:11:57.609 { 00:11:57.609 "name": null, 00:11:57.609 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:57.609 "is_configured": false, 00:11:57.609 "data_offset": 0, 00:11:57.609 "data_size": 65536 00:11:57.609 }, 00:11:57.609 { 00:11:57.609 "name": "BaseBdev3", 00:11:57.609 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:57.609 "is_configured": true, 00:11:57.609 "data_offset": 0, 00:11:57.609 "data_size": 65536 00:11:57.609 }, 00:11:57.609 { 00:11:57.609 "name": "BaseBdev4", 00:11:57.609 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:57.609 "is_configured": true, 00:11:57.609 "data_offset": 0, 00:11:57.609 "data_size": 65536 00:11:57.609 } 00:11:57.609 ] 00:11:57.609 }' 00:11:57.609 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.610 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 [2024-11-18 04:00:54.573375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.180 "name": "Existed_Raid", 00:11:58.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.180 "strip_size_kb": 0, 00:11:58.180 "state": "configuring", 00:11:58.180 "raid_level": "raid1", 00:11:58.180 "superblock": false, 00:11:58.180 "num_base_bdevs": 4, 00:11:58.180 "num_base_bdevs_discovered": 3, 00:11:58.180 "num_base_bdevs_operational": 4, 00:11:58.180 "base_bdevs_list": [ 00:11:58.180 { 00:11:58.180 "name": null, 00:11:58.180 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:58.180 "is_configured": false, 00:11:58.180 "data_offset": 0, 00:11:58.180 "data_size": 65536 00:11:58.180 }, 00:11:58.180 { 00:11:58.180 "name": "BaseBdev2", 00:11:58.180 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 0, 00:11:58.180 "data_size": 65536 00:11:58.180 }, 00:11:58.180 { 00:11:58.180 "name": "BaseBdev3", 00:11:58.180 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 0, 00:11:58.180 "data_size": 65536 00:11:58.180 }, 00:11:58.180 { 00:11:58.180 "name": "BaseBdev4", 00:11:58.180 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 0, 00:11:58.180 "data_size": 65536 00:11:58.180 } 00:11:58.180 ] 00:11:58.180 }' 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.180 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.440 04:00:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:58.440 04:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.440 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.440 04:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:58.440 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 935038fd-0b18-45bc-8a97-bff267d07bee 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.700 [2024-11-18 04:00:55.128166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.700 [2024-11-18 04:00:55.128306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:58.700 [2024-11-18 04:00:55.128335] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:58.700 
[2024-11-18 04:00:55.128658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:58.700 [2024-11-18 04:00:55.128919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:58.700 [2024-11-18 04:00:55.128964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:58.700 [2024-11-18 04:00:55.129316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.700 NewBaseBdev 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.700 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.701 [ 00:11:58.701 { 00:11:58.701 "name": "NewBaseBdev", 00:11:58.701 "aliases": [ 00:11:58.701 "935038fd-0b18-45bc-8a97-bff267d07bee" 00:11:58.701 ], 00:11:58.701 "product_name": "Malloc disk", 00:11:58.701 "block_size": 512, 00:11:58.701 "num_blocks": 65536, 00:11:58.701 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:58.701 "assigned_rate_limits": { 00:11:58.701 "rw_ios_per_sec": 0, 00:11:58.701 "rw_mbytes_per_sec": 0, 00:11:58.701 "r_mbytes_per_sec": 0, 00:11:58.701 "w_mbytes_per_sec": 0 00:11:58.701 }, 00:11:58.701 "claimed": true, 00:11:58.701 "claim_type": "exclusive_write", 00:11:58.701 "zoned": false, 00:11:58.701 "supported_io_types": { 00:11:58.701 "read": true, 00:11:58.701 "write": true, 00:11:58.701 "unmap": true, 00:11:58.701 "flush": true, 00:11:58.701 "reset": true, 00:11:58.701 "nvme_admin": false, 00:11:58.701 "nvme_io": false, 00:11:58.701 "nvme_io_md": false, 00:11:58.701 "write_zeroes": true, 00:11:58.701 "zcopy": true, 00:11:58.701 "get_zone_info": false, 00:11:58.701 "zone_management": false, 00:11:58.701 "zone_append": false, 00:11:58.701 "compare": false, 00:11:58.701 "compare_and_write": false, 00:11:58.701 "abort": true, 00:11:58.701 "seek_hole": false, 00:11:58.701 "seek_data": false, 00:11:58.701 "copy": true, 00:11:58.701 "nvme_iov_md": false 00:11:58.701 }, 00:11:58.701 "memory_domains": [ 00:11:58.701 { 00:11:58.701 "dma_device_id": "system", 00:11:58.701 "dma_device_type": 1 00:11:58.701 }, 00:11:58.701 { 00:11:58.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.701 "dma_device_type": 2 00:11:58.701 } 00:11:58.701 ], 00:11:58.701 "driver_specific": {} 00:11:58.701 } 00:11:58.701 ] 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.701 "name": "Existed_Raid", 00:11:58.701 "uuid": "7df62f72-82e0-43bf-a564-674156135200", 00:11:58.701 "strip_size_kb": 0, 00:11:58.701 "state": "online", 00:11:58.701 
"raid_level": "raid1", 00:11:58.701 "superblock": false, 00:11:58.701 "num_base_bdevs": 4, 00:11:58.701 "num_base_bdevs_discovered": 4, 00:11:58.701 "num_base_bdevs_operational": 4, 00:11:58.701 "base_bdevs_list": [ 00:11:58.701 { 00:11:58.701 "name": "NewBaseBdev", 00:11:58.701 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:58.701 "is_configured": true, 00:11:58.701 "data_offset": 0, 00:11:58.701 "data_size": 65536 00:11:58.701 }, 00:11:58.701 { 00:11:58.701 "name": "BaseBdev2", 00:11:58.701 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:58.701 "is_configured": true, 00:11:58.701 "data_offset": 0, 00:11:58.701 "data_size": 65536 00:11:58.701 }, 00:11:58.701 { 00:11:58.701 "name": "BaseBdev3", 00:11:58.701 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:58.701 "is_configured": true, 00:11:58.701 "data_offset": 0, 00:11:58.701 "data_size": 65536 00:11:58.701 }, 00:11:58.701 { 00:11:58.701 "name": "BaseBdev4", 00:11:58.701 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:58.701 "is_configured": true, 00:11:58.701 "data_offset": 0, 00:11:58.701 "data_size": 65536 00:11:58.701 } 00:11:58.701 ] 00:11:58.701 }' 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.701 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.271 [2024-11-18 04:00:55.651909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.271 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.271 "name": "Existed_Raid", 00:11:59.271 "aliases": [ 00:11:59.271 "7df62f72-82e0-43bf-a564-674156135200" 00:11:59.271 ], 00:11:59.271 "product_name": "Raid Volume", 00:11:59.271 "block_size": 512, 00:11:59.271 "num_blocks": 65536, 00:11:59.271 "uuid": "7df62f72-82e0-43bf-a564-674156135200", 00:11:59.271 "assigned_rate_limits": { 00:11:59.271 "rw_ios_per_sec": 0, 00:11:59.271 "rw_mbytes_per_sec": 0, 00:11:59.271 "r_mbytes_per_sec": 0, 00:11:59.271 "w_mbytes_per_sec": 0 00:11:59.272 }, 00:11:59.272 "claimed": false, 00:11:59.272 "zoned": false, 00:11:59.272 "supported_io_types": { 00:11:59.272 "read": true, 00:11:59.272 "write": true, 00:11:59.272 "unmap": false, 00:11:59.272 "flush": false, 00:11:59.272 "reset": true, 00:11:59.272 "nvme_admin": false, 00:11:59.272 "nvme_io": false, 00:11:59.272 "nvme_io_md": false, 00:11:59.272 "write_zeroes": true, 00:11:59.272 "zcopy": false, 00:11:59.272 "get_zone_info": false, 00:11:59.272 "zone_management": false, 00:11:59.272 "zone_append": false, 00:11:59.272 "compare": false, 00:11:59.272 "compare_and_write": false, 00:11:59.272 "abort": false, 00:11:59.272 "seek_hole": false, 00:11:59.272 "seek_data": false, 00:11:59.272 
"copy": false, 00:11:59.272 "nvme_iov_md": false 00:11:59.272 }, 00:11:59.272 "memory_domains": [ 00:11:59.272 { 00:11:59.272 "dma_device_id": "system", 00:11:59.272 "dma_device_type": 1 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.272 "dma_device_type": 2 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "system", 00:11:59.272 "dma_device_type": 1 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.272 "dma_device_type": 2 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "system", 00:11:59.272 "dma_device_type": 1 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.272 "dma_device_type": 2 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "system", 00:11:59.272 "dma_device_type": 1 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.272 "dma_device_type": 2 00:11:59.272 } 00:11:59.272 ], 00:11:59.272 "driver_specific": { 00:11:59.272 "raid": { 00:11:59.272 "uuid": "7df62f72-82e0-43bf-a564-674156135200", 00:11:59.272 "strip_size_kb": 0, 00:11:59.272 "state": "online", 00:11:59.272 "raid_level": "raid1", 00:11:59.272 "superblock": false, 00:11:59.272 "num_base_bdevs": 4, 00:11:59.272 "num_base_bdevs_discovered": 4, 00:11:59.272 "num_base_bdevs_operational": 4, 00:11:59.272 "base_bdevs_list": [ 00:11:59.272 { 00:11:59.272 "name": "NewBaseBdev", 00:11:59.272 "uuid": "935038fd-0b18-45bc-8a97-bff267d07bee", 00:11:59.272 "is_configured": true, 00:11:59.272 "data_offset": 0, 00:11:59.272 "data_size": 65536 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "name": "BaseBdev2", 00:11:59.272 "uuid": "d981d77c-c275-467b-93c3-1ee29032f414", 00:11:59.272 "is_configured": true, 00:11:59.272 "data_offset": 0, 00:11:59.272 "data_size": 65536 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "name": "BaseBdev3", 00:11:59.272 "uuid": "ef885126-3aef-4836-9366-b19706a7ab30", 00:11:59.272 
"is_configured": true, 00:11:59.272 "data_offset": 0, 00:11:59.272 "data_size": 65536 00:11:59.272 }, 00:11:59.272 { 00:11:59.272 "name": "BaseBdev4", 00:11:59.272 "uuid": "c4b080a1-6146-456f-a34c-d266da3bd4cd", 00:11:59.272 "is_configured": true, 00:11:59.272 "data_offset": 0, 00:11:59.272 "data_size": 65536 00:11:59.272 } 00:11:59.272 ] 00:11:59.272 } 00:11:59.272 } 00:11:59.272 }' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:59.272 BaseBdev2 00:11:59.272 BaseBdev3 00:11:59.272 BaseBdev4' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.272 04:00:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.272 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.533 04:00:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.533 [2024-11-18 04:00:55.962966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.533 [2024-11-18 04:00:55.963013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.533 [2024-11-18 04:00:55.963126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.533 [2024-11-18 04:00:55.963446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.533 [2024-11-18 04:00:55.963461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73169 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73169 ']' 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73169 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73169 00:11:59.533 killing process with pid 73169 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73169' 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73169 00:11:59.533 [2024-11-18 04:00:55.998026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.533 04:00:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73169 00:12:00.103 [2024-11-18 04:00:56.434056] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.043 ************************************ 00:12:01.043 END TEST raid_state_function_test 00:12:01.043 ************************************ 00:12:01.043 04:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:01.043 00:12:01.043 real 0m11.528s 00:12:01.043 user 0m18.021s 00:12:01.043 sys 0m2.126s 00:12:01.043 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.043 04:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:01.043 04:00:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:01.043 04:00:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:01.043 04:00:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.043 04:00:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.303 ************************************ 00:12:01.303 START TEST raid_state_function_test_sb 00:12:01.303 ************************************ 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.303 
04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.303 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73840 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:01.304 Process raid pid: 73840 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73840' 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73840 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73840 ']' 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.304 04:00:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.304 [2024-11-18 04:00:57.792178] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:01.304 [2024-11-18 04:00:57.792830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.564 [2024-11-18 04:00:57.949636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.564 [2024-11-18 04:00:58.089406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.823 [2024-11-18 04:00:58.325333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.823 [2024-11-18 04:00:58.325479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.084 [2024-11-18 04:00:58.631006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.084 [2024-11-18 04:00:58.631072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.084 [2024-11-18 04:00:58.631084] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.084 [2024-11-18 04:00:58.631093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.084 [2024-11-18 04:00:58.631099] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:02.084 [2024-11-18 04:00:58.631107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.084 [2024-11-18 04:00:58.631119] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:02.084 [2024-11-18 04:00:58.631127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.084 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.085 04:00:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.085 "name": "Existed_Raid", 00:12:02.085 "uuid": "f2ceeebe-30ae-4ea2-81d5-0652a0837d75", 00:12:02.085 "strip_size_kb": 0, 00:12:02.085 "state": "configuring", 00:12:02.085 "raid_level": "raid1", 00:12:02.085 "superblock": true, 00:12:02.085 "num_base_bdevs": 4, 00:12:02.085 "num_base_bdevs_discovered": 0, 00:12:02.085 "num_base_bdevs_operational": 4, 00:12:02.085 "base_bdevs_list": [ 00:12:02.085 { 00:12:02.085 "name": "BaseBdev1", 00:12:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.085 "is_configured": false, 00:12:02.085 "data_offset": 0, 00:12:02.085 "data_size": 0 00:12:02.085 }, 00:12:02.085 { 00:12:02.085 "name": "BaseBdev2", 00:12:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.085 "is_configured": false, 00:12:02.085 "data_offset": 0, 00:12:02.085 "data_size": 0 00:12:02.085 }, 00:12:02.085 { 00:12:02.085 "name": "BaseBdev3", 00:12:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.085 "is_configured": false, 00:12:02.085 "data_offset": 0, 00:12:02.085 "data_size": 0 00:12:02.085 }, 00:12:02.085 { 00:12:02.085 "name": "BaseBdev4", 00:12:02.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.085 "is_configured": false, 00:12:02.085 "data_offset": 0, 00:12:02.085 "data_size": 0 00:12:02.085 } 00:12:02.085 ] 00:12:02.085 }' 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.085 04:00:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 [2024-11-18 04:00:59.046243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.656 [2024-11-18 04:00:59.046398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 [2024-11-18 04:00:59.058184] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.656 [2024-11-18 04:00:59.058265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.656 [2024-11-18 04:00:59.058292] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.656 [2024-11-18 04:00:59.058315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.656 [2024-11-18 04:00:59.058333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.656 [2024-11-18 04:00:59.058353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.656 [2024-11-18 04:00:59.058370] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:02.656 [2024-11-18 04:00:59.058391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 [2024-11-18 04:00:59.111792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.656 BaseBdev1 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 [ 00:12:02.656 { 00:12:02.656 "name": "BaseBdev1", 00:12:02.656 "aliases": [ 00:12:02.656 "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de" 00:12:02.656 ], 00:12:02.656 "product_name": "Malloc disk", 00:12:02.656 "block_size": 512, 00:12:02.656 "num_blocks": 65536, 00:12:02.656 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:02.656 "assigned_rate_limits": { 00:12:02.656 "rw_ios_per_sec": 0, 00:12:02.656 "rw_mbytes_per_sec": 0, 00:12:02.656 "r_mbytes_per_sec": 0, 00:12:02.656 "w_mbytes_per_sec": 0 00:12:02.656 }, 00:12:02.656 "claimed": true, 00:12:02.656 "claim_type": "exclusive_write", 00:12:02.656 "zoned": false, 00:12:02.656 "supported_io_types": { 00:12:02.656 "read": true, 00:12:02.656 "write": true, 00:12:02.656 "unmap": true, 00:12:02.656 "flush": true, 00:12:02.656 "reset": true, 00:12:02.656 "nvme_admin": false, 00:12:02.656 "nvme_io": false, 00:12:02.656 "nvme_io_md": false, 00:12:02.656 "write_zeroes": true, 00:12:02.656 "zcopy": true, 00:12:02.656 "get_zone_info": false, 00:12:02.656 "zone_management": false, 00:12:02.656 "zone_append": false, 00:12:02.656 "compare": false, 00:12:02.656 "compare_and_write": false, 00:12:02.656 "abort": true, 00:12:02.656 "seek_hole": false, 00:12:02.656 "seek_data": false, 00:12:02.656 "copy": true, 00:12:02.656 "nvme_iov_md": false 00:12:02.656 }, 00:12:02.656 "memory_domains": [ 00:12:02.656 { 00:12:02.656 "dma_device_id": "system", 00:12:02.656 "dma_device_type": 1 00:12:02.656 }, 00:12:02.656 { 00:12:02.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.656 "dma_device_type": 2 00:12:02.656 } 00:12:02.656 ], 00:12:02.656 "driver_specific": {} 
00:12:02.656 } 00:12:02.656 ] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.656 "name": "Existed_Raid", 00:12:02.656 "uuid": "a6cf6f0c-637b-451b-a91a-e7530f34f0c1", 00:12:02.656 "strip_size_kb": 0, 00:12:02.656 "state": "configuring", 00:12:02.656 "raid_level": "raid1", 00:12:02.656 "superblock": true, 00:12:02.656 "num_base_bdevs": 4, 00:12:02.656 "num_base_bdevs_discovered": 1, 00:12:02.656 "num_base_bdevs_operational": 4, 00:12:02.656 "base_bdevs_list": [ 00:12:02.656 { 00:12:02.656 "name": "BaseBdev1", 00:12:02.656 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:02.656 "is_configured": true, 00:12:02.656 "data_offset": 2048, 00:12:02.656 "data_size": 63488 00:12:02.656 }, 00:12:02.656 { 00:12:02.656 "name": "BaseBdev2", 00:12:02.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.656 "is_configured": false, 00:12:02.656 "data_offset": 0, 00:12:02.656 "data_size": 0 00:12:02.656 }, 00:12:02.656 { 00:12:02.656 "name": "BaseBdev3", 00:12:02.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.656 "is_configured": false, 00:12:02.656 "data_offset": 0, 00:12:02.656 "data_size": 0 00:12:02.656 }, 00:12:02.656 { 00:12:02.656 "name": "BaseBdev4", 00:12:02.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.656 "is_configured": false, 00:12:02.656 "data_offset": 0, 00:12:02.656 "data_size": 0 00:12:02.656 } 00:12:02.656 ] 00:12:02.656 }' 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.656 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.916 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.916 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.916 04:00:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.175 [2024-11-18 04:00:59.555164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.176 [2024-11-18 04:00:59.555245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.176 [2024-11-18 04:00:59.567181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.176 [2024-11-18 04:00:59.569376] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.176 [2024-11-18 04:00:59.569457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.176 [2024-11-18 04:00:59.569489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.176 [2024-11-18 04:00:59.569516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.176 [2024-11-18 04:00:59.569551] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.176 [2024-11-18 04:00:59.569573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:03.176 04:00:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.176 "name": 
"Existed_Raid", 00:12:03.176 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:03.176 "strip_size_kb": 0, 00:12:03.176 "state": "configuring", 00:12:03.176 "raid_level": "raid1", 00:12:03.176 "superblock": true, 00:12:03.176 "num_base_bdevs": 4, 00:12:03.176 "num_base_bdevs_discovered": 1, 00:12:03.176 "num_base_bdevs_operational": 4, 00:12:03.176 "base_bdevs_list": [ 00:12:03.176 { 00:12:03.176 "name": "BaseBdev1", 00:12:03.176 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:03.176 "is_configured": true, 00:12:03.176 "data_offset": 2048, 00:12:03.176 "data_size": 63488 00:12:03.176 }, 00:12:03.176 { 00:12:03.176 "name": "BaseBdev2", 00:12:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.176 "is_configured": false, 00:12:03.176 "data_offset": 0, 00:12:03.176 "data_size": 0 00:12:03.176 }, 00:12:03.176 { 00:12:03.176 "name": "BaseBdev3", 00:12:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.176 "is_configured": false, 00:12:03.176 "data_offset": 0, 00:12:03.176 "data_size": 0 00:12:03.176 }, 00:12:03.176 { 00:12:03.176 "name": "BaseBdev4", 00:12:03.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.176 "is_configured": false, 00:12:03.176 "data_offset": 0, 00:12:03.176 "data_size": 0 00:12:03.176 } 00:12:03.176 ] 00:12:03.176 }' 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.176 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.436 04:00:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.436 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.436 04:00:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.436 [2024-11-18 04:01:00.041936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.436 
BaseBdev2 00:12:03.436 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.436 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.437 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.437 [ 00:12:03.437 { 00:12:03.437 "name": "BaseBdev2", 00:12:03.437 "aliases": [ 00:12:03.437 "383dab5a-b733-4444-89cb-96a1f272eb2a" 00:12:03.437 ], 00:12:03.437 "product_name": "Malloc disk", 00:12:03.437 "block_size": 512, 00:12:03.437 "num_blocks": 65536, 00:12:03.437 "uuid": "383dab5a-b733-4444-89cb-96a1f272eb2a", 00:12:03.437 "assigned_rate_limits": { 
00:12:03.437 "rw_ios_per_sec": 0, 00:12:03.437 "rw_mbytes_per_sec": 0, 00:12:03.437 "r_mbytes_per_sec": 0, 00:12:03.437 "w_mbytes_per_sec": 0 00:12:03.437 }, 00:12:03.437 "claimed": true, 00:12:03.437 "claim_type": "exclusive_write", 00:12:03.437 "zoned": false, 00:12:03.437 "supported_io_types": { 00:12:03.437 "read": true, 00:12:03.437 "write": true, 00:12:03.437 "unmap": true, 00:12:03.437 "flush": true, 00:12:03.437 "reset": true, 00:12:03.437 "nvme_admin": false, 00:12:03.437 "nvme_io": false, 00:12:03.437 "nvme_io_md": false, 00:12:03.437 "write_zeroes": true, 00:12:03.437 "zcopy": true, 00:12:03.437 "get_zone_info": false, 00:12:03.437 "zone_management": false, 00:12:03.437 "zone_append": false, 00:12:03.437 "compare": false, 00:12:03.437 "compare_and_write": false, 00:12:03.437 "abort": true, 00:12:03.437 "seek_hole": false, 00:12:03.437 "seek_data": false, 00:12:03.437 "copy": true, 00:12:03.437 "nvme_iov_md": false 00:12:03.437 }, 00:12:03.437 "memory_domains": [ 00:12:03.437 { 00:12:03.437 "dma_device_id": "system", 00:12:03.703 "dma_device_type": 1 00:12:03.703 }, 00:12:03.703 { 00:12:03.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.703 "dma_device_type": 2 00:12:03.703 } 00:12:03.703 ], 00:12:03.703 "driver_specific": {} 00:12:03.703 } 00:12:03.703 ] 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.703 "name": "Existed_Raid", 00:12:03.703 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:03.703 "strip_size_kb": 0, 00:12:03.703 "state": "configuring", 00:12:03.703 "raid_level": "raid1", 00:12:03.703 "superblock": true, 00:12:03.703 "num_base_bdevs": 4, 00:12:03.703 "num_base_bdevs_discovered": 2, 00:12:03.703 "num_base_bdevs_operational": 4, 00:12:03.703 
"base_bdevs_list": [ 00:12:03.703 { 00:12:03.703 "name": "BaseBdev1", 00:12:03.703 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:03.703 "is_configured": true, 00:12:03.703 "data_offset": 2048, 00:12:03.703 "data_size": 63488 00:12:03.703 }, 00:12:03.703 { 00:12:03.703 "name": "BaseBdev2", 00:12:03.703 "uuid": "383dab5a-b733-4444-89cb-96a1f272eb2a", 00:12:03.703 "is_configured": true, 00:12:03.703 "data_offset": 2048, 00:12:03.703 "data_size": 63488 00:12:03.703 }, 00:12:03.703 { 00:12:03.703 "name": "BaseBdev3", 00:12:03.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.703 "is_configured": false, 00:12:03.703 "data_offset": 0, 00:12:03.703 "data_size": 0 00:12:03.703 }, 00:12:03.703 { 00:12:03.703 "name": "BaseBdev4", 00:12:03.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.703 "is_configured": false, 00:12:03.703 "data_offset": 0, 00:12:03.703 "data_size": 0 00:12:03.703 } 00:12:03.703 ] 00:12:03.703 }' 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.703 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.972 [2024-11-18 04:01:00.584848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.972 BaseBdev3 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.972 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.972 [ 00:12:03.972 { 00:12:03.972 "name": "BaseBdev3", 00:12:03.972 "aliases": [ 00:12:03.972 "db6f6737-fca6-47af-a504-913ca9e17435" 00:12:03.972 ], 00:12:04.232 "product_name": "Malloc disk", 00:12:04.232 "block_size": 512, 00:12:04.232 "num_blocks": 65536, 00:12:04.232 "uuid": "db6f6737-fca6-47af-a504-913ca9e17435", 00:12:04.232 "assigned_rate_limits": { 00:12:04.232 "rw_ios_per_sec": 0, 00:12:04.232 "rw_mbytes_per_sec": 0, 00:12:04.232 "r_mbytes_per_sec": 0, 00:12:04.232 "w_mbytes_per_sec": 0 00:12:04.232 }, 00:12:04.232 "claimed": true, 00:12:04.232 "claim_type": "exclusive_write", 00:12:04.232 "zoned": false, 00:12:04.232 "supported_io_types": { 00:12:04.232 "read": true, 00:12:04.232 
"write": true, 00:12:04.232 "unmap": true, 00:12:04.232 "flush": true, 00:12:04.232 "reset": true, 00:12:04.232 "nvme_admin": false, 00:12:04.232 "nvme_io": false, 00:12:04.232 "nvme_io_md": false, 00:12:04.232 "write_zeroes": true, 00:12:04.232 "zcopy": true, 00:12:04.232 "get_zone_info": false, 00:12:04.232 "zone_management": false, 00:12:04.232 "zone_append": false, 00:12:04.232 "compare": false, 00:12:04.232 "compare_and_write": false, 00:12:04.232 "abort": true, 00:12:04.232 "seek_hole": false, 00:12:04.232 "seek_data": false, 00:12:04.232 "copy": true, 00:12:04.232 "nvme_iov_md": false 00:12:04.232 }, 00:12:04.232 "memory_domains": [ 00:12:04.232 { 00:12:04.232 "dma_device_id": "system", 00:12:04.233 "dma_device_type": 1 00:12:04.233 }, 00:12:04.233 { 00:12:04.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.233 "dma_device_type": 2 00:12:04.233 } 00:12:04.233 ], 00:12:04.233 "driver_specific": {} 00:12:04.233 } 00:12:04.233 ] 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.233 "name": "Existed_Raid", 00:12:04.233 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:04.233 "strip_size_kb": 0, 00:12:04.233 "state": "configuring", 00:12:04.233 "raid_level": "raid1", 00:12:04.233 "superblock": true, 00:12:04.233 "num_base_bdevs": 4, 00:12:04.233 "num_base_bdevs_discovered": 3, 00:12:04.233 "num_base_bdevs_operational": 4, 00:12:04.233 "base_bdevs_list": [ 00:12:04.233 { 00:12:04.233 "name": "BaseBdev1", 00:12:04.233 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:04.233 "is_configured": true, 00:12:04.233 "data_offset": 2048, 00:12:04.233 "data_size": 63488 00:12:04.233 }, 00:12:04.233 { 00:12:04.233 "name": "BaseBdev2", 00:12:04.233 "uuid": 
"383dab5a-b733-4444-89cb-96a1f272eb2a", 00:12:04.233 "is_configured": true, 00:12:04.233 "data_offset": 2048, 00:12:04.233 "data_size": 63488 00:12:04.233 }, 00:12:04.233 { 00:12:04.233 "name": "BaseBdev3", 00:12:04.233 "uuid": "db6f6737-fca6-47af-a504-913ca9e17435", 00:12:04.233 "is_configured": true, 00:12:04.233 "data_offset": 2048, 00:12:04.233 "data_size": 63488 00:12:04.233 }, 00:12:04.233 { 00:12:04.233 "name": "BaseBdev4", 00:12:04.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.233 "is_configured": false, 00:12:04.233 "data_offset": 0, 00:12:04.233 "data_size": 0 00:12:04.233 } 00:12:04.233 ] 00:12:04.233 }' 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.233 04:01:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.494 [2024-11-18 04:01:01.117519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.494 [2024-11-18 04:01:01.117854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.494 [2024-11-18 04:01:01.117872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.494 BaseBdev4 00:12:04.494 [2024-11-18 04:01:01.118187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.494 [2024-11-18 04:01:01.118369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.494 [2024-11-18 04:01:01.118385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:04.494 [2024-11-18 04:01:01.118573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.494 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.754 [ 00:12:04.754 { 00:12:04.754 "name": "BaseBdev4", 00:12:04.754 "aliases": [ 00:12:04.754 "d076bdf6-d584-4be7-b062-43a4cf590f55" 00:12:04.754 ], 00:12:04.754 "product_name": "Malloc disk", 00:12:04.754 "block_size": 512, 00:12:04.754 
"num_blocks": 65536, 00:12:04.754 "uuid": "d076bdf6-d584-4be7-b062-43a4cf590f55", 00:12:04.754 "assigned_rate_limits": { 00:12:04.754 "rw_ios_per_sec": 0, 00:12:04.754 "rw_mbytes_per_sec": 0, 00:12:04.754 "r_mbytes_per_sec": 0, 00:12:04.754 "w_mbytes_per_sec": 0 00:12:04.754 }, 00:12:04.754 "claimed": true, 00:12:04.754 "claim_type": "exclusive_write", 00:12:04.754 "zoned": false, 00:12:04.755 "supported_io_types": { 00:12:04.755 "read": true, 00:12:04.755 "write": true, 00:12:04.755 "unmap": true, 00:12:04.755 "flush": true, 00:12:04.755 "reset": true, 00:12:04.755 "nvme_admin": false, 00:12:04.755 "nvme_io": false, 00:12:04.755 "nvme_io_md": false, 00:12:04.755 "write_zeroes": true, 00:12:04.755 "zcopy": true, 00:12:04.755 "get_zone_info": false, 00:12:04.755 "zone_management": false, 00:12:04.755 "zone_append": false, 00:12:04.755 "compare": false, 00:12:04.755 "compare_and_write": false, 00:12:04.755 "abort": true, 00:12:04.755 "seek_hole": false, 00:12:04.755 "seek_data": false, 00:12:04.755 "copy": true, 00:12:04.755 "nvme_iov_md": false 00:12:04.755 }, 00:12:04.755 "memory_domains": [ 00:12:04.755 { 00:12:04.755 "dma_device_id": "system", 00:12:04.755 "dma_device_type": 1 00:12:04.755 }, 00:12:04.755 { 00:12:04.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.755 "dma_device_type": 2 00:12:04.755 } 00:12:04.755 ], 00:12:04.755 "driver_specific": {} 00:12:04.755 } 00:12:04.755 ] 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.755 "name": "Existed_Raid", 00:12:04.755 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:04.755 "strip_size_kb": 0, 00:12:04.755 "state": "online", 00:12:04.755 "raid_level": "raid1", 00:12:04.755 "superblock": true, 00:12:04.755 "num_base_bdevs": 4, 
00:12:04.755 "num_base_bdevs_discovered": 4, 00:12:04.755 "num_base_bdevs_operational": 4, 00:12:04.755 "base_bdevs_list": [ 00:12:04.755 { 00:12:04.755 "name": "BaseBdev1", 00:12:04.755 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:04.755 "is_configured": true, 00:12:04.755 "data_offset": 2048, 00:12:04.755 "data_size": 63488 00:12:04.755 }, 00:12:04.755 { 00:12:04.755 "name": "BaseBdev2", 00:12:04.755 "uuid": "383dab5a-b733-4444-89cb-96a1f272eb2a", 00:12:04.755 "is_configured": true, 00:12:04.755 "data_offset": 2048, 00:12:04.755 "data_size": 63488 00:12:04.755 }, 00:12:04.755 { 00:12:04.755 "name": "BaseBdev3", 00:12:04.755 "uuid": "db6f6737-fca6-47af-a504-913ca9e17435", 00:12:04.755 "is_configured": true, 00:12:04.755 "data_offset": 2048, 00:12:04.755 "data_size": 63488 00:12:04.755 }, 00:12:04.755 { 00:12:04.755 "name": "BaseBdev4", 00:12:04.755 "uuid": "d076bdf6-d584-4be7-b062-43a4cf590f55", 00:12:04.755 "is_configured": true, 00:12:04.755 "data_offset": 2048, 00:12:04.755 "data_size": 63488 00:12:04.755 } 00:12:04.755 ] 00:12:04.755 }' 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.755 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.015 
04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.015 [2024-11-18 04:01:01.605296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.015 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.015 "name": "Existed_Raid", 00:12:05.015 "aliases": [ 00:12:05.015 "dfb192bd-bdbb-4596-81e8-40711a5a8956" 00:12:05.015 ], 00:12:05.015 "product_name": "Raid Volume", 00:12:05.015 "block_size": 512, 00:12:05.015 "num_blocks": 63488, 00:12:05.015 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:05.015 "assigned_rate_limits": { 00:12:05.015 "rw_ios_per_sec": 0, 00:12:05.015 "rw_mbytes_per_sec": 0, 00:12:05.015 "r_mbytes_per_sec": 0, 00:12:05.015 "w_mbytes_per_sec": 0 00:12:05.015 }, 00:12:05.015 "claimed": false, 00:12:05.015 "zoned": false, 00:12:05.015 "supported_io_types": { 00:12:05.015 "read": true, 00:12:05.015 "write": true, 00:12:05.015 "unmap": false, 00:12:05.015 "flush": false, 00:12:05.015 "reset": true, 00:12:05.015 "nvme_admin": false, 00:12:05.015 "nvme_io": false, 00:12:05.015 "nvme_io_md": false, 00:12:05.015 "write_zeroes": true, 00:12:05.015 "zcopy": false, 00:12:05.015 "get_zone_info": false, 00:12:05.015 "zone_management": false, 00:12:05.015 "zone_append": false, 00:12:05.015 "compare": false, 00:12:05.015 "compare_and_write": false, 00:12:05.015 "abort": false, 00:12:05.015 "seek_hole": false, 00:12:05.015 "seek_data": false, 00:12:05.015 "copy": false, 00:12:05.015 
"nvme_iov_md": false 00:12:05.015 }, 00:12:05.015 "memory_domains": [ 00:12:05.015 { 00:12:05.015 "dma_device_id": "system", 00:12:05.015 "dma_device_type": 1 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.015 "dma_device_type": 2 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "system", 00:12:05.015 "dma_device_type": 1 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.015 "dma_device_type": 2 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "system", 00:12:05.015 "dma_device_type": 1 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.015 "dma_device_type": 2 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "system", 00:12:05.015 "dma_device_type": 1 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.015 "dma_device_type": 2 00:12:05.015 } 00:12:05.015 ], 00:12:05.015 "driver_specific": { 00:12:05.015 "raid": { 00:12:05.015 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:05.015 "strip_size_kb": 0, 00:12:05.015 "state": "online", 00:12:05.015 "raid_level": "raid1", 00:12:05.015 "superblock": true, 00:12:05.015 "num_base_bdevs": 4, 00:12:05.015 "num_base_bdevs_discovered": 4, 00:12:05.015 "num_base_bdevs_operational": 4, 00:12:05.015 "base_bdevs_list": [ 00:12:05.015 { 00:12:05.015 "name": "BaseBdev1", 00:12:05.015 "uuid": "1f9bbfc2-36c3-4a16-85ab-cc53fa7446de", 00:12:05.015 "is_configured": true, 00:12:05.015 "data_offset": 2048, 00:12:05.015 "data_size": 63488 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "name": "BaseBdev2", 00:12:05.015 "uuid": "383dab5a-b733-4444-89cb-96a1f272eb2a", 00:12:05.015 "is_configured": true, 00:12:05.015 "data_offset": 2048, 00:12:05.015 "data_size": 63488 00:12:05.015 }, 00:12:05.015 { 00:12:05.015 "name": "BaseBdev3", 00:12:05.016 "uuid": "db6f6737-fca6-47af-a504-913ca9e17435", 00:12:05.016 "is_configured": true, 
00:12:05.016 "data_offset": 2048, 00:12:05.016 "data_size": 63488 00:12:05.016 }, 00:12:05.016 { 00:12:05.016 "name": "BaseBdev4", 00:12:05.016 "uuid": "d076bdf6-d584-4be7-b062-43a4cf590f55", 00:12:05.016 "is_configured": true, 00:12:05.016 "data_offset": 2048, 00:12:05.016 "data_size": 63488 00:12:05.016 } 00:12:05.016 ] 00:12:05.016 } 00:12:05.016 } 00:12:05.016 }' 00:12:05.016 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.274 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.274 BaseBdev2 00:12:05.274 BaseBdev3 00:12:05.274 BaseBdev4' 00:12:05.274 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.274 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.274 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.274 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.275 04:01:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.275 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.534 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.534 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.534 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.534 04:01:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.534 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.534 04:01:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.534 [2024-11-18 04:01:01.952442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:05.534 04:01:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.534 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.535 "name": "Existed_Raid", 00:12:05.535 "uuid": "dfb192bd-bdbb-4596-81e8-40711a5a8956", 00:12:05.535 "strip_size_kb": 0, 00:12:05.535 
"state": "online", 00:12:05.535 "raid_level": "raid1", 00:12:05.535 "superblock": true, 00:12:05.535 "num_base_bdevs": 4, 00:12:05.535 "num_base_bdevs_discovered": 3, 00:12:05.535 "num_base_bdevs_operational": 3, 00:12:05.535 "base_bdevs_list": [ 00:12:05.535 { 00:12:05.535 "name": null, 00:12:05.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.535 "is_configured": false, 00:12:05.535 "data_offset": 0, 00:12:05.535 "data_size": 63488 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "name": "BaseBdev2", 00:12:05.535 "uuid": "383dab5a-b733-4444-89cb-96a1f272eb2a", 00:12:05.535 "is_configured": true, 00:12:05.535 "data_offset": 2048, 00:12:05.535 "data_size": 63488 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "name": "BaseBdev3", 00:12:05.535 "uuid": "db6f6737-fca6-47af-a504-913ca9e17435", 00:12:05.535 "is_configured": true, 00:12:05.535 "data_offset": 2048, 00:12:05.535 "data_size": 63488 00:12:05.535 }, 00:12:05.535 { 00:12:05.535 "name": "BaseBdev4", 00:12:05.535 "uuid": "d076bdf6-d584-4be7-b062-43a4cf590f55", 00:12:05.535 "is_configured": true, 00:12:05.535 "data_offset": 2048, 00:12:05.535 "data_size": 63488 00:12:05.535 } 00:12:05.535 ] 00:12:05.535 }' 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.535 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.104 04:01:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.104 [2024-11-18 04:01:02.590150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.104 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.105 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.105 [2024-11-18 04:01:02.733218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.364 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.365 [2024-11-18 04:01:02.883248] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:06.365 [2024-11-18 04:01:02.883359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.365 [2024-11-18 04:01:02.981563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.365 [2024-11-18 04:01:02.981622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.365 [2024-11-18 04:01:02.981634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.365 04:01:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.625 BaseBdev2 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:06.625 [ 00:12:06.625 { 00:12:06.625 "name": "BaseBdev2", 00:12:06.625 "aliases": [ 00:12:06.625 "735689b5-127d-4d79-bd4c-80cf0b549016" 00:12:06.625 ], 00:12:06.625 "product_name": "Malloc disk", 00:12:06.625 "block_size": 512, 00:12:06.625 "num_blocks": 65536, 00:12:06.625 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:06.625 "assigned_rate_limits": { 00:12:06.625 "rw_ios_per_sec": 0, 00:12:06.625 "rw_mbytes_per_sec": 0, 00:12:06.625 "r_mbytes_per_sec": 0, 00:12:06.625 "w_mbytes_per_sec": 0 00:12:06.625 }, 00:12:06.625 "claimed": false, 00:12:06.625 "zoned": false, 00:12:06.625 "supported_io_types": { 00:12:06.625 "read": true, 00:12:06.625 "write": true, 00:12:06.625 "unmap": true, 00:12:06.625 "flush": true, 00:12:06.625 "reset": true, 00:12:06.625 "nvme_admin": false, 00:12:06.625 "nvme_io": false, 00:12:06.625 "nvme_io_md": false, 00:12:06.625 "write_zeroes": true, 00:12:06.625 "zcopy": true, 00:12:06.625 "get_zone_info": false, 00:12:06.625 "zone_management": false, 00:12:06.625 "zone_append": false, 00:12:06.625 "compare": false, 00:12:06.625 "compare_and_write": false, 00:12:06.625 "abort": true, 00:12:06.625 "seek_hole": false, 00:12:06.625 "seek_data": false, 00:12:06.625 "copy": true, 00:12:06.625 "nvme_iov_md": false 00:12:06.625 }, 00:12:06.625 "memory_domains": [ 00:12:06.625 { 00:12:06.625 "dma_device_id": "system", 00:12:06.625 "dma_device_type": 1 00:12:06.625 }, 00:12:06.625 { 00:12:06.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.625 "dma_device_type": 2 00:12:06.625 } 00:12:06.625 ], 00:12:06.625 "driver_specific": {} 00:12:06.625 } 00:12:06.625 ] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.625 04:01:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.625 BaseBdev3 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.625 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.626 04:01:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.626 [ 00:12:06.626 { 00:12:06.626 "name": "BaseBdev3", 00:12:06.626 "aliases": [ 00:12:06.626 "3faafe23-53cb-4468-9af9-f0b269ec9a8c" 00:12:06.626 ], 00:12:06.626 "product_name": "Malloc disk", 00:12:06.626 "block_size": 512, 00:12:06.626 "num_blocks": 65536, 00:12:06.626 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:06.626 "assigned_rate_limits": { 00:12:06.626 "rw_ios_per_sec": 0, 00:12:06.626 "rw_mbytes_per_sec": 0, 00:12:06.626 "r_mbytes_per_sec": 0, 00:12:06.626 "w_mbytes_per_sec": 0 00:12:06.626 }, 00:12:06.626 "claimed": false, 00:12:06.626 "zoned": false, 00:12:06.626 "supported_io_types": { 00:12:06.626 "read": true, 00:12:06.626 "write": true, 00:12:06.626 "unmap": true, 00:12:06.626 "flush": true, 00:12:06.626 "reset": true, 00:12:06.626 "nvme_admin": false, 00:12:06.626 "nvme_io": false, 00:12:06.626 "nvme_io_md": false, 00:12:06.626 "write_zeroes": true, 00:12:06.626 "zcopy": true, 00:12:06.626 "get_zone_info": false, 00:12:06.626 "zone_management": false, 00:12:06.626 "zone_append": false, 00:12:06.626 "compare": false, 00:12:06.626 "compare_and_write": false, 00:12:06.626 "abort": true, 00:12:06.626 "seek_hole": false, 00:12:06.626 "seek_data": false, 00:12:06.626 "copy": true, 00:12:06.626 "nvme_iov_md": false 00:12:06.626 }, 00:12:06.626 "memory_domains": [ 00:12:06.626 { 00:12:06.626 "dma_device_id": "system", 00:12:06.626 "dma_device_type": 1 00:12:06.626 }, 00:12:06.626 { 00:12:06.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.626 "dma_device_type": 2 00:12:06.626 } 00:12:06.626 ], 00:12:06.626 "driver_specific": {} 00:12:06.626 } 00:12:06.626 ] 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.626 BaseBdev4 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.626 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 [ 00:12:06.887 { 00:12:06.887 "name": "BaseBdev4", 00:12:06.887 "aliases": [ 00:12:06.887 "4ad455d6-346a-45f4-84c3-2925aeeff8c3" 00:12:06.887 ], 00:12:06.887 "product_name": "Malloc disk", 00:12:06.887 "block_size": 512, 00:12:06.887 "num_blocks": 65536, 00:12:06.887 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:06.887 "assigned_rate_limits": { 00:12:06.887 "rw_ios_per_sec": 0, 00:12:06.887 "rw_mbytes_per_sec": 0, 00:12:06.887 "r_mbytes_per_sec": 0, 00:12:06.887 "w_mbytes_per_sec": 0 00:12:06.887 }, 00:12:06.887 "claimed": false, 00:12:06.887 "zoned": false, 00:12:06.887 "supported_io_types": { 00:12:06.887 "read": true, 00:12:06.887 "write": true, 00:12:06.887 "unmap": true, 00:12:06.887 "flush": true, 00:12:06.887 "reset": true, 00:12:06.887 "nvme_admin": false, 00:12:06.887 "nvme_io": false, 00:12:06.887 "nvme_io_md": false, 00:12:06.887 "write_zeroes": true, 00:12:06.887 "zcopy": true, 00:12:06.887 "get_zone_info": false, 00:12:06.887 "zone_management": false, 00:12:06.887 "zone_append": false, 00:12:06.887 "compare": false, 00:12:06.887 "compare_and_write": false, 00:12:06.887 "abort": true, 00:12:06.887 "seek_hole": false, 00:12:06.887 "seek_data": false, 00:12:06.887 "copy": true, 00:12:06.887 "nvme_iov_md": false 00:12:06.887 }, 00:12:06.887 "memory_domains": [ 00:12:06.887 { 00:12:06.887 "dma_device_id": "system", 00:12:06.887 "dma_device_type": 1 00:12:06.887 }, 00:12:06.887 { 00:12:06.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.887 "dma_device_type": 2 00:12:06.887 } 00:12:06.887 ], 00:12:06.887 "driver_specific": {} 00:12:06.887 } 00:12:06.887 ] 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 [2024-11-18 04:01:03.283960] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.887 [2024-11-18 04:01:03.284010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.887 [2024-11-18 04:01:03.284030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.887 [2024-11-18 04:01:03.285862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.887 [2024-11-18 04:01:03.285912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.887 "name": "Existed_Raid", 00:12:06.887 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:06.887 "strip_size_kb": 0, 00:12:06.887 "state": "configuring", 00:12:06.887 "raid_level": "raid1", 00:12:06.887 "superblock": true, 00:12:06.887 "num_base_bdevs": 4, 00:12:06.887 "num_base_bdevs_discovered": 3, 00:12:06.887 "num_base_bdevs_operational": 4, 00:12:06.887 "base_bdevs_list": [ 00:12:06.887 { 00:12:06.887 "name": "BaseBdev1", 00:12:06.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.887 "is_configured": false, 00:12:06.887 "data_offset": 0, 00:12:06.887 "data_size": 0 00:12:06.887 }, 00:12:06.887 { 00:12:06.887 "name": "BaseBdev2", 00:12:06.887 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 
00:12:06.887 "is_configured": true, 00:12:06.887 "data_offset": 2048, 00:12:06.887 "data_size": 63488 00:12:06.887 }, 00:12:06.887 { 00:12:06.887 "name": "BaseBdev3", 00:12:06.887 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:06.887 "is_configured": true, 00:12:06.887 "data_offset": 2048, 00:12:06.887 "data_size": 63488 00:12:06.887 }, 00:12:06.887 { 00:12:06.887 "name": "BaseBdev4", 00:12:06.887 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:06.887 "is_configured": true, 00:12:06.887 "data_offset": 2048, 00:12:06.887 "data_size": 63488 00:12:06.887 } 00:12:06.887 ] 00:12:06.887 }' 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.887 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.148 [2024-11-18 04:01:03.735219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.148 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.408 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.408 "name": "Existed_Raid", 00:12:07.408 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:07.408 "strip_size_kb": 0, 00:12:07.408 "state": "configuring", 00:12:07.408 "raid_level": "raid1", 00:12:07.408 "superblock": true, 00:12:07.408 "num_base_bdevs": 4, 00:12:07.408 "num_base_bdevs_discovered": 2, 00:12:07.408 "num_base_bdevs_operational": 4, 00:12:07.408 "base_bdevs_list": [ 00:12:07.408 { 00:12:07.408 "name": "BaseBdev1", 00:12:07.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.408 "is_configured": false, 00:12:07.408 "data_offset": 0, 00:12:07.408 "data_size": 0 00:12:07.408 }, 00:12:07.408 { 00:12:07.408 "name": null, 00:12:07.408 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:07.408 
"is_configured": false, 00:12:07.408 "data_offset": 0, 00:12:07.408 "data_size": 63488 00:12:07.408 }, 00:12:07.408 { 00:12:07.408 "name": "BaseBdev3", 00:12:07.408 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:07.408 "is_configured": true, 00:12:07.408 "data_offset": 2048, 00:12:07.408 "data_size": 63488 00:12:07.408 }, 00:12:07.408 { 00:12:07.408 "name": "BaseBdev4", 00:12:07.408 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:07.408 "is_configured": true, 00:12:07.408 "data_offset": 2048, 00:12:07.408 "data_size": 63488 00:12:07.408 } 00:12:07.408 ] 00:12:07.408 }' 00:12:07.408 04:01:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.408 04:01:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 [2024-11-18 04:01:04.205332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.668 BaseBdev1 
00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.668 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 [ 00:12:07.668 { 00:12:07.668 "name": "BaseBdev1", 00:12:07.668 "aliases": [ 00:12:07.668 "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc" 00:12:07.668 ], 00:12:07.668 "product_name": "Malloc disk", 00:12:07.668 "block_size": 512, 00:12:07.668 "num_blocks": 65536, 00:12:07.668 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:07.668 "assigned_rate_limits": { 00:12:07.668 
"rw_ios_per_sec": 0, 00:12:07.668 "rw_mbytes_per_sec": 0, 00:12:07.668 "r_mbytes_per_sec": 0, 00:12:07.668 "w_mbytes_per_sec": 0 00:12:07.668 }, 00:12:07.668 "claimed": true, 00:12:07.668 "claim_type": "exclusive_write", 00:12:07.668 "zoned": false, 00:12:07.668 "supported_io_types": { 00:12:07.668 "read": true, 00:12:07.668 "write": true, 00:12:07.668 "unmap": true, 00:12:07.668 "flush": true, 00:12:07.668 "reset": true, 00:12:07.668 "nvme_admin": false, 00:12:07.668 "nvme_io": false, 00:12:07.668 "nvme_io_md": false, 00:12:07.668 "write_zeroes": true, 00:12:07.668 "zcopy": true, 00:12:07.668 "get_zone_info": false, 00:12:07.668 "zone_management": false, 00:12:07.668 "zone_append": false, 00:12:07.668 "compare": false, 00:12:07.668 "compare_and_write": false, 00:12:07.668 "abort": true, 00:12:07.668 "seek_hole": false, 00:12:07.668 "seek_data": false, 00:12:07.668 "copy": true, 00:12:07.668 "nvme_iov_md": false 00:12:07.668 }, 00:12:07.668 "memory_domains": [ 00:12:07.668 { 00:12:07.668 "dma_device_id": "system", 00:12:07.668 "dma_device_type": 1 00:12:07.668 }, 00:12:07.668 { 00:12:07.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.668 "dma_device_type": 2 00:12:07.669 } 00:12:07.669 ], 00:12:07.669 "driver_specific": {} 00:12:07.669 } 00:12:07.669 ] 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.669 "name": "Existed_Raid", 00:12:07.669 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:07.669 "strip_size_kb": 0, 00:12:07.669 "state": "configuring", 00:12:07.669 "raid_level": "raid1", 00:12:07.669 "superblock": true, 00:12:07.669 "num_base_bdevs": 4, 00:12:07.669 "num_base_bdevs_discovered": 3, 00:12:07.669 "num_base_bdevs_operational": 4, 00:12:07.669 "base_bdevs_list": [ 00:12:07.669 { 00:12:07.669 "name": "BaseBdev1", 00:12:07.669 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:07.669 "is_configured": true, 00:12:07.669 "data_offset": 2048, 00:12:07.669 "data_size": 63488 
00:12:07.669 }, 00:12:07.669 { 00:12:07.669 "name": null, 00:12:07.669 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:07.669 "is_configured": false, 00:12:07.669 "data_offset": 0, 00:12:07.669 "data_size": 63488 00:12:07.669 }, 00:12:07.669 { 00:12:07.669 "name": "BaseBdev3", 00:12:07.669 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:07.669 "is_configured": true, 00:12:07.669 "data_offset": 2048, 00:12:07.669 "data_size": 63488 00:12:07.669 }, 00:12:07.669 { 00:12:07.669 "name": "BaseBdev4", 00:12:07.669 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:07.669 "is_configured": true, 00:12:07.669 "data_offset": 2048, 00:12:07.669 "data_size": 63488 00:12:07.669 } 00:12:07.669 ] 00:12:07.669 }' 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.669 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.239 
[2024-11-18 04:01:04.744472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.239 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.240 04:01:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.240 "name": "Existed_Raid", 00:12:08.240 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:08.240 "strip_size_kb": 0, 00:12:08.240 "state": "configuring", 00:12:08.240 "raid_level": "raid1", 00:12:08.240 "superblock": true, 00:12:08.240 "num_base_bdevs": 4, 00:12:08.240 "num_base_bdevs_discovered": 2, 00:12:08.240 "num_base_bdevs_operational": 4, 00:12:08.240 "base_bdevs_list": [ 00:12:08.240 { 00:12:08.240 "name": "BaseBdev1", 00:12:08.240 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:08.240 "is_configured": true, 00:12:08.240 "data_offset": 2048, 00:12:08.240 "data_size": 63488 00:12:08.240 }, 00:12:08.240 { 00:12:08.240 "name": null, 00:12:08.240 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:08.240 "is_configured": false, 00:12:08.240 "data_offset": 0, 00:12:08.240 "data_size": 63488 00:12:08.240 }, 00:12:08.240 { 00:12:08.240 "name": null, 00:12:08.240 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:08.240 "is_configured": false, 00:12:08.240 "data_offset": 0, 00:12:08.240 "data_size": 63488 00:12:08.240 }, 00:12:08.240 { 00:12:08.240 "name": "BaseBdev4", 00:12:08.240 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:08.240 "is_configured": true, 00:12:08.240 "data_offset": 2048, 00:12:08.240 "data_size": 63488 00:12:08.240 } 00:12:08.240 ] 00:12:08.240 }' 00:12:08.240 04:01:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.240 04:01:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.499 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.500 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.500 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.500 
04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.759 [2024-11-18 04:01:05.179744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.759 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.759 "name": "Existed_Raid", 00:12:08.759 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:08.759 "strip_size_kb": 0, 00:12:08.759 "state": "configuring", 00:12:08.759 "raid_level": "raid1", 00:12:08.759 "superblock": true, 00:12:08.759 "num_base_bdevs": 4, 00:12:08.759 "num_base_bdevs_discovered": 3, 00:12:08.759 "num_base_bdevs_operational": 4, 00:12:08.759 "base_bdevs_list": [ 00:12:08.759 { 00:12:08.759 "name": "BaseBdev1", 00:12:08.759 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:08.759 "is_configured": true, 00:12:08.759 "data_offset": 2048, 00:12:08.759 "data_size": 63488 00:12:08.759 }, 00:12:08.759 { 00:12:08.759 "name": null, 00:12:08.759 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:08.759 "is_configured": false, 00:12:08.759 "data_offset": 0, 00:12:08.759 "data_size": 63488 00:12:08.759 }, 00:12:08.760 { 00:12:08.760 "name": "BaseBdev3", 00:12:08.760 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:08.760 "is_configured": true, 00:12:08.760 "data_offset": 2048, 00:12:08.760 "data_size": 63488 00:12:08.760 }, 00:12:08.760 { 00:12:08.760 "name": "BaseBdev4", 00:12:08.760 "uuid": 
"4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:08.760 "is_configured": true, 00:12:08.760 "data_offset": 2048, 00:12:08.760 "data_size": 63488 00:12:08.760 } 00:12:08.760 ] 00:12:08.760 }' 00:12:08.760 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.760 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.020 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.020 [2024-11-18 04:01:05.639021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.280 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.280 "name": "Existed_Raid", 00:12:09.280 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:09.280 "strip_size_kb": 0, 00:12:09.280 "state": "configuring", 00:12:09.280 "raid_level": "raid1", 00:12:09.280 "superblock": true, 00:12:09.280 "num_base_bdevs": 4, 00:12:09.280 "num_base_bdevs_discovered": 2, 00:12:09.280 "num_base_bdevs_operational": 4, 00:12:09.280 "base_bdevs_list": [ 00:12:09.280 { 00:12:09.280 "name": null, 00:12:09.280 
"uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:09.281 "is_configured": false, 00:12:09.281 "data_offset": 0, 00:12:09.281 "data_size": 63488 00:12:09.281 }, 00:12:09.281 { 00:12:09.281 "name": null, 00:12:09.281 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:09.281 "is_configured": false, 00:12:09.281 "data_offset": 0, 00:12:09.281 "data_size": 63488 00:12:09.281 }, 00:12:09.281 { 00:12:09.281 "name": "BaseBdev3", 00:12:09.281 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:09.281 "is_configured": true, 00:12:09.281 "data_offset": 2048, 00:12:09.281 "data_size": 63488 00:12:09.281 }, 00:12:09.281 { 00:12:09.281 "name": "BaseBdev4", 00:12:09.281 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:09.281 "is_configured": true, 00:12:09.281 "data_offset": 2048, 00:12:09.281 "data_size": 63488 00:12:09.281 } 00:12:09.281 ] 00:12:09.281 }' 00:12:09.281 04:01:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.281 04:01:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.541 [2024-11-18 04:01:06.171208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.541 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.800 "name": "Existed_Raid", 00:12:09.800 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:09.800 "strip_size_kb": 0, 00:12:09.800 "state": "configuring", 00:12:09.800 "raid_level": "raid1", 00:12:09.800 "superblock": true, 00:12:09.800 "num_base_bdevs": 4, 00:12:09.800 "num_base_bdevs_discovered": 3, 00:12:09.800 "num_base_bdevs_operational": 4, 00:12:09.800 "base_bdevs_list": [ 00:12:09.800 { 00:12:09.800 "name": null, 00:12:09.800 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:09.800 "is_configured": false, 00:12:09.800 "data_offset": 0, 00:12:09.800 "data_size": 63488 00:12:09.800 }, 00:12:09.800 { 00:12:09.800 "name": "BaseBdev2", 00:12:09.800 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:09.800 "is_configured": true, 00:12:09.800 "data_offset": 2048, 00:12:09.800 "data_size": 63488 00:12:09.800 }, 00:12:09.800 { 00:12:09.800 "name": "BaseBdev3", 00:12:09.800 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:09.800 "is_configured": true, 00:12:09.800 "data_offset": 2048, 00:12:09.800 "data_size": 63488 00:12:09.800 }, 00:12:09.800 { 00:12:09.800 "name": "BaseBdev4", 00:12:09.800 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:09.800 "is_configured": true, 00:12:09.800 "data_offset": 2048, 00:12:09.800 "data_size": 63488 00:12:09.800 } 00:12:09.800 ] 00:12:09.800 }' 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.800 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.061 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.061 [2024-11-18 04:01:06.698502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:10.061 [2024-11-18 04:01:06.698787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.061 [2024-11-18 04:01:06.698815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.322 NewBaseBdev 00:12:10.322 [2024-11-18 04:01:06.699131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:12:10.322 [2024-11-18 04:01:06.699334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.322 [2024-11-18 04:01:06.699347] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:10.322 [2024-11-18 04:01:06.699507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.322 [ 00:12:10.322 { 00:12:10.322 "name": "NewBaseBdev", 00:12:10.322 "aliases": [ 00:12:10.322 "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc" 00:12:10.322 ], 00:12:10.322 "product_name": "Malloc disk", 00:12:10.322 "block_size": 512, 00:12:10.322 "num_blocks": 65536, 00:12:10.322 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:10.322 "assigned_rate_limits": { 00:12:10.322 "rw_ios_per_sec": 0, 00:12:10.322 "rw_mbytes_per_sec": 0, 00:12:10.322 "r_mbytes_per_sec": 0, 00:12:10.322 "w_mbytes_per_sec": 0 00:12:10.322 }, 00:12:10.322 "claimed": true, 00:12:10.322 "claim_type": "exclusive_write", 00:12:10.322 "zoned": false, 00:12:10.322 "supported_io_types": { 00:12:10.322 "read": true, 00:12:10.322 "write": true, 00:12:10.322 "unmap": true, 00:12:10.322 "flush": true, 00:12:10.322 "reset": true, 00:12:10.322 "nvme_admin": false, 00:12:10.322 "nvme_io": false, 00:12:10.322 "nvme_io_md": false, 00:12:10.322 "write_zeroes": true, 00:12:10.322 "zcopy": true, 00:12:10.322 "get_zone_info": false, 00:12:10.322 "zone_management": false, 00:12:10.322 "zone_append": false, 00:12:10.322 "compare": false, 00:12:10.322 "compare_and_write": false, 00:12:10.322 "abort": true, 00:12:10.322 "seek_hole": false, 00:12:10.322 "seek_data": false, 00:12:10.322 "copy": true, 00:12:10.322 "nvme_iov_md": false 00:12:10.322 }, 00:12:10.322 "memory_domains": [ 00:12:10.322 { 00:12:10.322 "dma_device_id": "system", 00:12:10.322 "dma_device_type": 1 00:12:10.322 }, 00:12:10.322 { 00:12:10.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.322 "dma_device_type": 2 00:12:10.322 } 00:12:10.322 ], 00:12:10.322 "driver_specific": {} 00:12:10.322 } 00:12:10.322 ] 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.322 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.323 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.323 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.323 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.323 "name": "Existed_Raid", 00:12:10.323 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:10.323 "strip_size_kb": 0, 00:12:10.323 "state": "online", 00:12:10.323 "raid_level": 
"raid1", 00:12:10.323 "superblock": true, 00:12:10.323 "num_base_bdevs": 4, 00:12:10.323 "num_base_bdevs_discovered": 4, 00:12:10.323 "num_base_bdevs_operational": 4, 00:12:10.323 "base_bdevs_list": [ 00:12:10.323 { 00:12:10.323 "name": "NewBaseBdev", 00:12:10.323 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:10.323 "is_configured": true, 00:12:10.323 "data_offset": 2048, 00:12:10.323 "data_size": 63488 00:12:10.323 }, 00:12:10.323 { 00:12:10.323 "name": "BaseBdev2", 00:12:10.323 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:10.323 "is_configured": true, 00:12:10.323 "data_offset": 2048, 00:12:10.323 "data_size": 63488 00:12:10.323 }, 00:12:10.323 { 00:12:10.323 "name": "BaseBdev3", 00:12:10.323 "uuid": "3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:10.323 "is_configured": true, 00:12:10.323 "data_offset": 2048, 00:12:10.323 "data_size": 63488 00:12:10.323 }, 00:12:10.323 { 00:12:10.323 "name": "BaseBdev4", 00:12:10.323 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:10.323 "is_configured": true, 00:12:10.323 "data_offset": 2048, 00:12:10.323 "data_size": 63488 00:12:10.323 } 00:12:10.323 ] 00:12:10.323 }' 00:12:10.323 04:01:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.323 04:01:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.582 [2024-11-18 04:01:07.146213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.582 "name": "Existed_Raid", 00:12:10.582 "aliases": [ 00:12:10.582 "8b57ee28-5f98-47c5-b402-c07941576ade" 00:12:10.582 ], 00:12:10.582 "product_name": "Raid Volume", 00:12:10.582 "block_size": 512, 00:12:10.582 "num_blocks": 63488, 00:12:10.582 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:10.582 "assigned_rate_limits": { 00:12:10.582 "rw_ios_per_sec": 0, 00:12:10.582 "rw_mbytes_per_sec": 0, 00:12:10.582 "r_mbytes_per_sec": 0, 00:12:10.582 "w_mbytes_per_sec": 0 00:12:10.582 }, 00:12:10.582 "claimed": false, 00:12:10.582 "zoned": false, 00:12:10.582 "supported_io_types": { 00:12:10.582 "read": true, 00:12:10.582 "write": true, 00:12:10.582 "unmap": false, 00:12:10.582 "flush": false, 00:12:10.582 "reset": true, 00:12:10.582 "nvme_admin": false, 00:12:10.582 "nvme_io": false, 00:12:10.582 "nvme_io_md": false, 00:12:10.582 "write_zeroes": true, 00:12:10.582 "zcopy": false, 00:12:10.582 "get_zone_info": false, 00:12:10.582 "zone_management": false, 00:12:10.582 "zone_append": false, 00:12:10.582 "compare": false, 00:12:10.582 "compare_and_write": false, 00:12:10.582 "abort": false, 00:12:10.582 "seek_hole": false, 
00:12:10.582 "seek_data": false, 00:12:10.582 "copy": false, 00:12:10.582 "nvme_iov_md": false 00:12:10.582 }, 00:12:10.582 "memory_domains": [ 00:12:10.582 { 00:12:10.582 "dma_device_id": "system", 00:12:10.582 "dma_device_type": 1 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.582 "dma_device_type": 2 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "system", 00:12:10.582 "dma_device_type": 1 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.582 "dma_device_type": 2 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "system", 00:12:10.582 "dma_device_type": 1 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.582 "dma_device_type": 2 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "system", 00:12:10.582 "dma_device_type": 1 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.582 "dma_device_type": 2 00:12:10.582 } 00:12:10.582 ], 00:12:10.582 "driver_specific": { 00:12:10.582 "raid": { 00:12:10.582 "uuid": "8b57ee28-5f98-47c5-b402-c07941576ade", 00:12:10.582 "strip_size_kb": 0, 00:12:10.582 "state": "online", 00:12:10.582 "raid_level": "raid1", 00:12:10.582 "superblock": true, 00:12:10.582 "num_base_bdevs": 4, 00:12:10.582 "num_base_bdevs_discovered": 4, 00:12:10.582 "num_base_bdevs_operational": 4, 00:12:10.582 "base_bdevs_list": [ 00:12:10.582 { 00:12:10.582 "name": "NewBaseBdev", 00:12:10.582 "uuid": "2d7f949e-6b57-48de-b9fe-93ad9c2cf0bc", 00:12:10.582 "is_configured": true, 00:12:10.582 "data_offset": 2048, 00:12:10.582 "data_size": 63488 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "name": "BaseBdev2", 00:12:10.582 "uuid": "735689b5-127d-4d79-bd4c-80cf0b549016", 00:12:10.582 "is_configured": true, 00:12:10.582 "data_offset": 2048, 00:12:10.582 "data_size": 63488 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "name": "BaseBdev3", 00:12:10.582 "uuid": 
"3faafe23-53cb-4468-9af9-f0b269ec9a8c", 00:12:10.582 "is_configured": true, 00:12:10.582 "data_offset": 2048, 00:12:10.582 "data_size": 63488 00:12:10.582 }, 00:12:10.582 { 00:12:10.582 "name": "BaseBdev4", 00:12:10.582 "uuid": "4ad455d6-346a-45f4-84c3-2925aeeff8c3", 00:12:10.582 "is_configured": true, 00:12:10.582 "data_offset": 2048, 00:12:10.582 "data_size": 63488 00:12:10.582 } 00:12:10.582 ] 00:12:10.582 } 00:12:10.582 } 00:12:10.582 }' 00:12:10.582 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:10.842 BaseBdev2 00:12:10.842 BaseBdev3 00:12:10.842 BaseBdev4' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.842 
04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.842 [2024-11-18 04:01:07.465254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.842 [2024-11-18 04:01:07.465300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.842 [2024-11-18 04:01:07.465395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.842 [2024-11-18 04:01:07.465705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.842 [2024-11-18 04:01:07.465727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.842 04:01:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73840 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73840 ']' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73840 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.842 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73840 00:12:11.101 killing process with pid 73840 00:12:11.101 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.101 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.101 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73840' 00:12:11.101 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73840 00:12:11.101 [2024-11-18 04:01:07.507072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.101 04:01:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73840 00:12:11.360 [2024-11-18 04:01:07.928835] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.740 04:01:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:12.740 00:12:12.740 real 0m11.425s 00:12:12.740 user 0m17.924s 00:12:12.740 sys 0m2.082s 00:12:12.740 04:01:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.740 04:01:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.740 ************************************ 00:12:12.740 END TEST raid_state_function_test_sb 00:12:12.740 ************************************ 00:12:12.740 04:01:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:12.740 04:01:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.740 04:01:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.740 04:01:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.740 ************************************ 00:12:12.740 START TEST raid_superblock_test 00:12:12.740 ************************************ 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74505 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74505 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74505 ']' 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.740 04:01:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.740 [2024-11-18 04:01:09.277310] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:12.740 [2024-11-18 04:01:09.277432] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74505 ] 00:12:13.000 [2024-11-18 04:01:09.452053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.000 [2024-11-18 04:01:09.593765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.258 [2024-11-18 04:01:09.827747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.258 [2024-11-18 04:01:09.827805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:13.517 
04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.517 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.776 malloc1 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.776 [2024-11-18 04:01:10.170288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:13.776 [2024-11-18 04:01:10.170376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.776 [2024-11-18 04:01:10.170405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:13.776 [2024-11-18 04:01:10.170415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.776 [2024-11-18 04:01:10.172973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.776 [2024-11-18 04:01:10.173007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:13.776 pt1 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.776 malloc2 00:12:13.776 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 [2024-11-18 04:01:10.231292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:13.777 [2024-11-18 04:01:10.231442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.777 [2024-11-18 04:01:10.231482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:13.777 [2024-11-18 04:01:10.231509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.777 [2024-11-18 04:01:10.233833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.777 [2024-11-18 04:01:10.233917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:13.777 
pt2 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 malloc3 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 [2024-11-18 04:01:10.300988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:13.777 [2024-11-18 04:01:10.301085] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.777 [2024-11-18 04:01:10.301124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:13.777 [2024-11-18 04:01:10.301151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.777 [2024-11-18 04:01:10.303389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.777 [2024-11-18 04:01:10.303461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:13.777 pt3 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 malloc4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 [2024-11-18 04:01:10.367126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:13.777 [2024-11-18 04:01:10.367223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.777 [2024-11-18 04:01:10.367257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:13.777 [2024-11-18 04:01:10.367284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.777 [2024-11-18 04:01:10.369634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.777 [2024-11-18 04:01:10.369705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:13.777 pt4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 [2024-11-18 04:01:10.379133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:13.777 [2024-11-18 04:01:10.381279] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:13.777 [2024-11-18 04:01:10.381347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:13.777 [2024-11-18 04:01:10.381388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:13.777 [2024-11-18 04:01:10.381607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:13.777 [2024-11-18 04:01:10.381633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.777 [2024-11-18 04:01:10.381942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.777 [2024-11-18 04:01:10.382132] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:13.777 [2024-11-18 04:01:10.382158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:13.777 [2024-11-18 04:01:10.382313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.777 
04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.777 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.036 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.036 "name": "raid_bdev1", 00:12:14.036 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:14.036 "strip_size_kb": 0, 00:12:14.036 "state": "online", 00:12:14.036 "raid_level": "raid1", 00:12:14.036 "superblock": true, 00:12:14.036 "num_base_bdevs": 4, 00:12:14.036 "num_base_bdevs_discovered": 4, 00:12:14.036 "num_base_bdevs_operational": 4, 00:12:14.036 "base_bdevs_list": [ 00:12:14.036 { 00:12:14.036 "name": "pt1", 00:12:14.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.036 "is_configured": true, 00:12:14.036 "data_offset": 2048, 00:12:14.036 "data_size": 63488 00:12:14.036 }, 00:12:14.036 { 00:12:14.036 "name": "pt2", 00:12:14.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.036 "is_configured": true, 00:12:14.036 "data_offset": 2048, 00:12:14.036 "data_size": 63488 00:12:14.036 }, 00:12:14.036 { 00:12:14.036 "name": "pt3", 00:12:14.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.036 "is_configured": true, 00:12:14.036 "data_offset": 2048, 00:12:14.036 "data_size": 63488 
00:12:14.036 }, 00:12:14.036 { 00:12:14.036 "name": "pt4", 00:12:14.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.036 "is_configured": true, 00:12:14.036 "data_offset": 2048, 00:12:14.036 "data_size": 63488 00:12:14.036 } 00:12:14.036 ] 00:12:14.036 }' 00:12:14.036 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.036 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.296 [2024-11-18 04:01:10.850717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.296 "name": "raid_bdev1", 00:12:14.296 "aliases": [ 00:12:14.296 "b60f1185-77e7-48dc-8022-5984ddd5e58e" 00:12:14.296 ], 
00:12:14.296 "product_name": "Raid Volume", 00:12:14.296 "block_size": 512, 00:12:14.296 "num_blocks": 63488, 00:12:14.296 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:14.296 "assigned_rate_limits": { 00:12:14.296 "rw_ios_per_sec": 0, 00:12:14.296 "rw_mbytes_per_sec": 0, 00:12:14.296 "r_mbytes_per_sec": 0, 00:12:14.296 "w_mbytes_per_sec": 0 00:12:14.296 }, 00:12:14.296 "claimed": false, 00:12:14.296 "zoned": false, 00:12:14.296 "supported_io_types": { 00:12:14.296 "read": true, 00:12:14.296 "write": true, 00:12:14.296 "unmap": false, 00:12:14.296 "flush": false, 00:12:14.296 "reset": true, 00:12:14.296 "nvme_admin": false, 00:12:14.296 "nvme_io": false, 00:12:14.296 "nvme_io_md": false, 00:12:14.296 "write_zeroes": true, 00:12:14.296 "zcopy": false, 00:12:14.296 "get_zone_info": false, 00:12:14.296 "zone_management": false, 00:12:14.296 "zone_append": false, 00:12:14.296 "compare": false, 00:12:14.296 "compare_and_write": false, 00:12:14.296 "abort": false, 00:12:14.296 "seek_hole": false, 00:12:14.296 "seek_data": false, 00:12:14.296 "copy": false, 00:12:14.296 "nvme_iov_md": false 00:12:14.296 }, 00:12:14.296 "memory_domains": [ 00:12:14.296 { 00:12:14.296 "dma_device_id": "system", 00:12:14.296 "dma_device_type": 1 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.296 "dma_device_type": 2 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": "system", 00:12:14.296 "dma_device_type": 1 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.296 "dma_device_type": 2 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": "system", 00:12:14.296 "dma_device_type": 1 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.296 "dma_device_type": 2 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": "system", 00:12:14.296 "dma_device_type": 1 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:14.296 "dma_device_type": 2 00:12:14.296 } 00:12:14.296 ], 00:12:14.296 "driver_specific": { 00:12:14.296 "raid": { 00:12:14.296 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:14.296 "strip_size_kb": 0, 00:12:14.296 "state": "online", 00:12:14.296 "raid_level": "raid1", 00:12:14.296 "superblock": true, 00:12:14.296 "num_base_bdevs": 4, 00:12:14.296 "num_base_bdevs_discovered": 4, 00:12:14.296 "num_base_bdevs_operational": 4, 00:12:14.296 "base_bdevs_list": [ 00:12:14.296 { 00:12:14.296 "name": "pt1", 00:12:14.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.296 "is_configured": true, 00:12:14.296 "data_offset": 2048, 00:12:14.296 "data_size": 63488 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "name": "pt2", 00:12:14.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.296 "is_configured": true, 00:12:14.296 "data_offset": 2048, 00:12:14.296 "data_size": 63488 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "name": "pt3", 00:12:14.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.296 "is_configured": true, 00:12:14.296 "data_offset": 2048, 00:12:14.296 "data_size": 63488 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "name": "pt4", 00:12:14.296 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.296 "is_configured": true, 00:12:14.296 "data_offset": 2048, 00:12:14.296 "data_size": 63488 00:12:14.296 } 00:12:14.296 ] 00:12:14.296 } 00:12:14.296 } 00:12:14.296 }' 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:14.296 pt2 00:12:14.296 pt3 00:12:14.296 pt4' 00:12:14.296 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.556 04:01:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.556 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.556 [2024-11-18 04:01:11.182006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b60f1185-77e7-48dc-8022-5984ddd5e58e
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b60f1185-77e7-48dc-8022-5984ddd5e58e ']'
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 [2024-11-18 04:01:11.213704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:14.816 [2024-11-18 04:01:11.213815] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:14.816 [2024-11-18 04:01:11.213929] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:14.816 [2024-11-18 04:01:11.214026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:14.816 [2024-11-18 04:01:11.214043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.816 [2024-11-18 04:01:11.369486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:14.816 [2024-11-18 04:01:11.371748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:14.816 [2024-11-18 04:01:11.371869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:14.816 [2024-11-18 04:01:11.371926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:14.816 [2024-11-18 04:01:11.372012] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:14.816 [2024-11-18 04:01:11.372117] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:14.816 [2024-11-18 04:01:11.372179] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:14.816 [2024-11-18 04:01:11.372231] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:14.816 [2024-11-18 04:01:11.372279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:14.816 [2024-11-18 04:01:11.372313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:14.816 request:
00:12:14.816 {
00:12:14.816 "name": "raid_bdev1",
00:12:14.816 "raid_level": "raid1",
00:12:14.816 "base_bdevs": [
00:12:14.816 "malloc1",
00:12:14.816 "malloc2",
00:12:14.816 "malloc3",
00:12:14.816 "malloc4"
00:12:14.816 ],
00:12:14.816 "superblock": false,
00:12:14.816 "method": "bdev_raid_create",
00:12:14.816 "req_id": 1
00:12:14.816 }
00:12:14.816 Got JSON-RPC error response
00:12:14.816 response:
00:12:14.816 {
00:12:14.816 "code": -17,
00:12:14.816 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:14.816 }
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:14.816 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.817 [2024-11-18 04:01:11.441282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:14.817 [2024-11-18 04:01:11.441415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:14.817 [2024-11-18 04:01:11.441448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:14.817 [2024-11-18 04:01:11.441481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:14.817 [2024-11-18 04:01:11.443973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:14.817 [2024-11-18 04:01:11.444049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:14.817 [2024-11-18 04:01:11.444165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:14.817 [2024-11-18 04:01:11.444266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:14.817 pt1
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:14.817 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.076 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.076 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.076 "name": "raid_bdev1",
00:12:15.076 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e",
00:12:15.076 "strip_size_kb": 0,
00:12:15.076 "state": "configuring",
00:12:15.076 "raid_level": "raid1",
00:12:15.076 "superblock": true,
00:12:15.076 "num_base_bdevs": 4,
00:12:15.076 "num_base_bdevs_discovered": 1,
00:12:15.076 "num_base_bdevs_operational": 4,
00:12:15.076 "base_bdevs_list": [
00:12:15.076 {
00:12:15.076 "name": "pt1",
00:12:15.076 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:15.076 "is_configured": true,
00:12:15.076 "data_offset": 2048,
00:12:15.076 "data_size": 63488
00:12:15.076 },
00:12:15.076 {
00:12:15.076 "name": null,
00:12:15.076 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:15.076 "is_configured": false,
00:12:15.076 "data_offset": 2048,
00:12:15.076 "data_size": 63488
00:12:15.076 },
00:12:15.076 {
00:12:15.076 "name": null,
00:12:15.076 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:15.076 "is_configured": false,
00:12:15.076 "data_offset": 2048,
00:12:15.076 "data_size": 63488
00:12:15.076 },
00:12:15.076 {
00:12:15.076 "name": null,
00:12:15.076 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:15.076 "is_configured": false,
00:12:15.076 "data_offset": 2048,
00:12:15.076 "data_size": 63488
00:12:15.076 }
00:12:15.076 ]
00:12:15.076 }'
00:12:15.076 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.076 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.335 [2024-11-18 04:01:11.936517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:15.335 [2024-11-18 04:01:11.936701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.335 [2024-11-18 04:01:11.936742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:15.335 [2024-11-18 04:01:11.936805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.335 [2024-11-18 04:01:11.937374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.335 [2024-11-18 04:01:11.937444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:15.335 [2024-11-18 04:01:11.937578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:15.335 [2024-11-18 04:01:11.937648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:15.335 pt2
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.335 [2024-11-18 04:01:11.948443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.335 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.336 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.336 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.336 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.336 04:01:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.594 04:01:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.594 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.594 "name": "raid_bdev1",
00:12:15.594 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e",
00:12:15.594 "strip_size_kb": 0,
00:12:15.594 "state": "configuring",
00:12:15.594 "raid_level": "raid1",
00:12:15.594 "superblock": true,
00:12:15.594 "num_base_bdevs": 4,
00:12:15.594 "num_base_bdevs_discovered": 1,
00:12:15.594 "num_base_bdevs_operational": 4,
00:12:15.594 "base_bdevs_list": [
00:12:15.594 {
00:12:15.594 "name": "pt1",
00:12:15.594 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:15.594 "is_configured": true,
00:12:15.595 "data_offset": 2048,
00:12:15.595 "data_size": 63488
00:12:15.595 },
00:12:15.595 {
00:12:15.595 "name": null,
00:12:15.595 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:15.595 "is_configured": false,
00:12:15.595 "data_offset": 0,
00:12:15.595 "data_size": 63488
00:12:15.595 },
00:12:15.595 {
00:12:15.595 "name": null,
00:12:15.595 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:15.595 "is_configured": false,
00:12:15.595 "data_offset": 2048,
00:12:15.595 "data_size": 63488
00:12:15.595 },
00:12:15.595 {
00:12:15.595 "name": null,
00:12:15.595 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:15.595 "is_configured": false,
00:12:15.595 "data_offset": 2048,
00:12:15.595 "data_size": 63488
00:12:15.595 }
00:12:15.595 ]
00:12:15.595 }'
00:12:15.595 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.595 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.854 [2024-11-18 04:01:12.411735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:15.854 [2024-11-18 04:01:12.411914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.854 [2024-11-18 04:01:12.411951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:15.854 [2024-11-18 04:01:12.411963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.854 [2024-11-18 04:01:12.412491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.854 [2024-11-18 04:01:12.412516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:15.854 [2024-11-18 04:01:12.412617] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:15.854 [2024-11-18 04:01:12.412649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:15.854 pt2
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.854 [2024-11-18 04:01:12.423671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:15.854 [2024-11-18 04:01:12.423766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.854 [2024-11-18 04:01:12.423791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:15.854 [2024-11-18 04:01:12.423799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.854 [2024-11-18 04:01:12.424214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.854 [2024-11-18 04:01:12.424239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:15.854 [2024-11-18 04:01:12.424311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:15.854 [2024-11-18 04:01:12.424330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:15.854 pt3
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.854 [2024-11-18 04:01:12.435610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:15.854 [2024-11-18 04:01:12.435688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:15.854 [2024-11-18 04:01:12.435721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:15.854 [2024-11-18 04:01:12.435746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:15.854 [2024-11-18 04:01:12.436134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:15.854 [2024-11-18 04:01:12.436193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:15.854 [2024-11-18 04:01:12.436275] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:15.854 [2024-11-18 04:01:12.436318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:15.854 [2024-11-18 04:01:12.436490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:15.854 [2024-11-18 04:01:12.436528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:15.854 [2024-11-18 04:01:12.436821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:15.854 [2024-11-18 04:01:12.437017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:15.854 [2024-11-18 04:01:12.437077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:15.854 [2024-11-18 04:01:12.437256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:15.854 pt4
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:15.854 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:15.855 "name": "raid_bdev1",
00:12:15.855 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e",
00:12:15.855 "strip_size_kb": 0,
00:12:15.855 "state": "online",
00:12:15.855 "raid_level": "raid1",
00:12:15.855 "superblock": true,
00:12:15.855 "num_base_bdevs": 4,
00:12:15.855 "num_base_bdevs_discovered": 4,
00:12:15.855 "num_base_bdevs_operational": 4,
00:12:15.855 "base_bdevs_list": [
00:12:15.855 {
00:12:15.855 "name": "pt1",
00:12:15.855 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:15.855 "is_configured": true,
00:12:15.855 "data_offset": 2048,
00:12:15.855 "data_size": 63488
00:12:15.855 },
00:12:15.855 {
00:12:15.855 "name": "pt2",
00:12:15.855 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:15.855 "is_configured": true,
00:12:15.855 "data_offset": 2048,
00:12:15.855 "data_size": 63488
00:12:15.855 },
00:12:15.855 {
00:12:15.855 "name": "pt3",
00:12:15.855 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:15.855 "is_configured": true,
00:12:15.855 "data_offset": 2048,
00:12:15.855 "data_size": 63488
00:12:15.855 },
00:12:15.855 {
00:12:15.855 "name": "pt4",
00:12:15.855 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:15.855 "is_configured": true,
00:12:15.855 "data_offset": 2048,
00:12:15.855 "data_size": 63488
00:12:15.855 }
00:12:15.855 ]
00:12:15.855 }'
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:15.855 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:16.423 [2024-11-18 04:01:12.899276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.423 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:16.423 "name": "raid_bdev1",
00:12:16.423 "aliases": [
00:12:16.423 "b60f1185-77e7-48dc-8022-5984ddd5e58e"
00:12:16.423 ],
00:12:16.423 "product_name": "Raid Volume",
00:12:16.423 "block_size": 512,
00:12:16.423 "num_blocks": 63488,
00:12:16.423 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e",
00:12:16.423 "assigned_rate_limits": {
00:12:16.423 "rw_ios_per_sec": 0,
00:12:16.423 "rw_mbytes_per_sec": 0,
00:12:16.423 "r_mbytes_per_sec": 0,
00:12:16.423 "w_mbytes_per_sec": 0
00:12:16.423 },
00:12:16.423 "claimed": false,
00:12:16.423 "zoned": false,
00:12:16.423 "supported_io_types": {
00:12:16.423 "read": true,
00:12:16.423 "write": true,
00:12:16.423 "unmap": false,
00:12:16.423 "flush": false,
00:12:16.423 "reset": true,
00:12:16.423 "nvme_admin": false,
00:12:16.423 "nvme_io": false,
00:12:16.423 "nvme_io_md": false,
00:12:16.423 "write_zeroes": true,
00:12:16.423 "zcopy": false,
00:12:16.423 "get_zone_info": false,
00:12:16.423 "zone_management": false,
00:12:16.423 "zone_append": false,
00:12:16.423 "compare": false,
00:12:16.423 "compare_and_write": false,
00:12:16.423 "abort": false,
00:12:16.423 "seek_hole": false,
00:12:16.423 "seek_data": false,
00:12:16.423 "copy": false,
00:12:16.423 "nvme_iov_md": false
00:12:16.423 },
00:12:16.423 "memory_domains": [
00:12:16.423 {
00:12:16.423 "dma_device_id": "system",
00:12:16.423 "dma_device_type": 1
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:16.423 "dma_device_type": 2
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "system",
00:12:16.423 "dma_device_type": 1
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:16.423 "dma_device_type": 2
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "system",
00:12:16.423 "dma_device_type": 1
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:16.423 "dma_device_type": 2
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "system",
00:12:16.423 "dma_device_type": 1
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:16.423 "dma_device_type": 2
00:12:16.423 }
00:12:16.423 ],
00:12:16.423 "driver_specific": {
00:12:16.423 "raid": {
00:12:16.423 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e",
00:12:16.423 "strip_size_kb": 0,
00:12:16.423 "state": "online",
00:12:16.423 "raid_level": "raid1",
00:12:16.423 "superblock": true,
00:12:16.423 "num_base_bdevs": 4,
00:12:16.423 "num_base_bdevs_discovered": 4,
00:12:16.423 "num_base_bdevs_operational": 4,
00:12:16.423 "base_bdevs_list": [
00:12:16.423 {
00:12:16.423 "name": "pt1",
00:12:16.423 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:16.423 "is_configured": true,
00:12:16.423 "data_offset": 2048,
00:12:16.423 "data_size": 63488
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "name": "pt2",
00:12:16.423 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:16.423 "is_configured": true,
00:12:16.423 "data_offset": 2048,
00:12:16.423 "data_size": 63488
00:12:16.423 },
00:12:16.423 {
00:12:16.423 "name": "pt3",
00:12:16.423 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:16.423 "is_configured": true,
00:12:16.423 "data_offset": 2048,
00:12:16.424 "data_size": 63488
00:12:16.424 },
00:12:16.424 {
00:12:16.424 "name": "pt4",
00:12:16.424 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:16.424 "is_configured": true,
00:12:16.424 "data_offset": 2048,
00:12:16.424 "data_size": 63488
00:12:16.424 }
00:12:16.424 ]
00:12:16.424 }
00:12:16.424 }
00:12:16.424 }'
00:12:16.424 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:16.424 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:16.424 pt2
00:12:16.424 pt3
00:12:16.424 pt4'
00:12:16.424 04:01:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:16.424 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:16.684 04:01:13
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.684 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 [2024-11-18 04:01:13.238626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b60f1185-77e7-48dc-8022-5984ddd5e58e '!=' b60f1185-77e7-48dc-8022-5984ddd5e58e ']' 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 [2024-11-18 04:01:13.270273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:16.685 04:01:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.685 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.943 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.943 "name": "raid_bdev1", 00:12:16.943 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:16.943 "strip_size_kb": 0, 00:12:16.943 "state": "online", 
00:12:16.943 "raid_level": "raid1", 00:12:16.943 "superblock": true, 00:12:16.943 "num_base_bdevs": 4, 00:12:16.943 "num_base_bdevs_discovered": 3, 00:12:16.943 "num_base_bdevs_operational": 3, 00:12:16.943 "base_bdevs_list": [ 00:12:16.943 { 00:12:16.943 "name": null, 00:12:16.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.943 "is_configured": false, 00:12:16.943 "data_offset": 0, 00:12:16.943 "data_size": 63488 00:12:16.943 }, 00:12:16.943 { 00:12:16.943 "name": "pt2", 00:12:16.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.943 "is_configured": true, 00:12:16.943 "data_offset": 2048, 00:12:16.943 "data_size": 63488 00:12:16.943 }, 00:12:16.943 { 00:12:16.943 "name": "pt3", 00:12:16.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.943 "is_configured": true, 00:12:16.943 "data_offset": 2048, 00:12:16.943 "data_size": 63488 00:12:16.943 }, 00:12:16.943 { 00:12:16.943 "name": "pt4", 00:12:16.943 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.943 "is_configured": true, 00:12:16.943 "data_offset": 2048, 00:12:16.943 "data_size": 63488 00:12:16.943 } 00:12:16.943 ] 00:12:16.943 }' 00:12:16.943 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.943 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 [2024-11-18 04:01:13.697523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.203 [2024-11-18 04:01:13.697574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.203 [2024-11-18 04:01:13.697674] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:17.203 [2024-11-18 04:01:13.697763] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.203 [2024-11-18 04:01:13.697774] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:17.203 
04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.203 [2024-11-18 04:01:13.777351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:17.203 [2024-11-18 04:01:13.777422] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.203 [2024-11-18 04:01:13.777442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:17.203 [2024-11-18 04:01:13.777451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.203 [2024-11-18 04:01:13.780074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.203 [2024-11-18 04:01:13.780113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:17.203 [2024-11-18 04:01:13.780199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:17.203 [2024-11-18 04:01:13.780248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.203 pt2 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:17.203 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.204 "name": "raid_bdev1", 00:12:17.204 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:17.204 "strip_size_kb": 0, 00:12:17.204 "state": "configuring", 00:12:17.204 "raid_level": "raid1", 00:12:17.204 "superblock": true, 00:12:17.204 "num_base_bdevs": 4, 00:12:17.204 "num_base_bdevs_discovered": 1, 00:12:17.204 "num_base_bdevs_operational": 3, 00:12:17.204 "base_bdevs_list": [ 00:12:17.204 { 00:12:17.204 "name": null, 00:12:17.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.204 "is_configured": false, 00:12:17.204 "data_offset": 2048, 00:12:17.204 "data_size": 63488 00:12:17.204 }, 00:12:17.204 { 00:12:17.204 "name": "pt2", 00:12:17.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.204 "is_configured": true, 00:12:17.204 "data_offset": 2048, 00:12:17.204 "data_size": 63488 00:12:17.204 }, 00:12:17.204 { 00:12:17.204 "name": null, 00:12:17.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.204 "is_configured": false, 00:12:17.204 "data_offset": 2048, 00:12:17.204 "data_size": 63488 00:12:17.204 }, 00:12:17.204 { 00:12:17.204 "name": null, 00:12:17.204 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.204 "is_configured": false, 00:12:17.204 "data_offset": 2048, 00:12:17.204 "data_size": 63488 00:12:17.204 } 00:12:17.204 ] 00:12:17.204 }' 
00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.204 04:01:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.774 [2024-11-18 04:01:14.217707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:17.774 [2024-11-18 04:01:14.217838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.774 [2024-11-18 04:01:14.217878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:17.774 [2024-11-18 04:01:14.217891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.774 [2024-11-18 04:01:14.218711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.774 [2024-11-18 04:01:14.218747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:17.774 [2024-11-18 04:01:14.218970] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:17.774 [2024-11-18 04:01:14.219008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.774 pt3 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.774 "name": "raid_bdev1", 00:12:17.774 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:17.774 "strip_size_kb": 0, 00:12:17.774 "state": "configuring", 00:12:17.774 "raid_level": "raid1", 00:12:17.774 "superblock": true, 00:12:17.774 "num_base_bdevs": 4, 00:12:17.774 "num_base_bdevs_discovered": 2, 00:12:17.774 "num_base_bdevs_operational": 3, 00:12:17.774 
"base_bdevs_list": [ 00:12:17.774 { 00:12:17.774 "name": null, 00:12:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.774 "is_configured": false, 00:12:17.774 "data_offset": 2048, 00:12:17.774 "data_size": 63488 00:12:17.774 }, 00:12:17.774 { 00:12:17.774 "name": "pt2", 00:12:17.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.774 "is_configured": true, 00:12:17.774 "data_offset": 2048, 00:12:17.774 "data_size": 63488 00:12:17.774 }, 00:12:17.774 { 00:12:17.774 "name": "pt3", 00:12:17.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.774 "is_configured": true, 00:12:17.774 "data_offset": 2048, 00:12:17.774 "data_size": 63488 00:12:17.774 }, 00:12:17.774 { 00:12:17.774 "name": null, 00:12:17.774 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.774 "is_configured": false, 00:12:17.774 "data_offset": 2048, 00:12:17.774 "data_size": 63488 00:12:17.774 } 00:12:17.774 ] 00:12:17.774 }' 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.774 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.343 [2024-11-18 04:01:14.712888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:18.343 [2024-11-18 04:01:14.713082] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.343 [2024-11-18 04:01:14.713137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:18.343 [2024-11-18 04:01:14.713170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.343 [2024-11-18 04:01:14.713733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.343 [2024-11-18 04:01:14.713797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:18.343 [2024-11-18 04:01:14.713938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:18.343 [2024-11-18 04:01:14.714007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:18.343 [2024-11-18 04:01:14.714257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:18.343 [2024-11-18 04:01:14.714303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.343 [2024-11-18 04:01:14.714602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:18.343 [2024-11-18 04:01:14.714802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:18.343 [2024-11-18 04:01:14.714862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:18.343 [2024-11-18 04:01:14.715055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.343 pt4 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.343 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.344 "name": "raid_bdev1", 00:12:18.344 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:18.344 "strip_size_kb": 0, 00:12:18.344 "state": "online", 00:12:18.344 "raid_level": "raid1", 00:12:18.344 "superblock": true, 00:12:18.344 "num_base_bdevs": 4, 00:12:18.344 "num_base_bdevs_discovered": 3, 00:12:18.344 "num_base_bdevs_operational": 3, 00:12:18.344 "base_bdevs_list": [ 00:12:18.344 { 00:12:18.344 "name": null, 00:12:18.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.344 "is_configured": false, 00:12:18.344 
"data_offset": 2048, 00:12:18.344 "data_size": 63488 00:12:18.344 }, 00:12:18.344 { 00:12:18.344 "name": "pt2", 00:12:18.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.344 "is_configured": true, 00:12:18.344 "data_offset": 2048, 00:12:18.344 "data_size": 63488 00:12:18.344 }, 00:12:18.344 { 00:12:18.344 "name": "pt3", 00:12:18.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.344 "is_configured": true, 00:12:18.344 "data_offset": 2048, 00:12:18.344 "data_size": 63488 00:12:18.344 }, 00:12:18.344 { 00:12:18.344 "name": "pt4", 00:12:18.344 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.344 "is_configured": true, 00:12:18.344 "data_offset": 2048, 00:12:18.344 "data_size": 63488 00:12:18.344 } 00:12:18.344 ] 00:12:18.344 }' 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.344 04:01:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.603 [2024-11-18 04:01:15.172012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.603 [2024-11-18 04:01:15.172156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.603 [2024-11-18 04:01:15.172275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.603 [2024-11-18 04:01:15.172367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.603 [2024-11-18 04:01:15.172383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:18.603 04:01:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.603 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.863 [2024-11-18 04:01:15.243870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:18.863 [2024-11-18 04:01:15.243938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:18.863 [2024-11-18 04:01:15.243956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:18.863 [2024-11-18 04:01:15.243968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.863 [2024-11-18 04:01:15.246548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.863 [2024-11-18 04:01:15.246643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:18.863 [2024-11-18 04:01:15.246736] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:18.863 [2024-11-18 04:01:15.246786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:18.863 [2024-11-18 04:01:15.246972] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:18.863 [2024-11-18 04:01:15.246988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.863 [2024-11-18 04:01:15.247004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:18.863 [2024-11-18 04:01:15.247066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:18.863 [2024-11-18 04:01:15.247171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:18.863 pt1 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.863 "name": "raid_bdev1", 00:12:18.863 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:18.863 "strip_size_kb": 0, 00:12:18.863 "state": "configuring", 00:12:18.863 "raid_level": "raid1", 00:12:18.863 "superblock": true, 00:12:18.863 "num_base_bdevs": 4, 00:12:18.863 "num_base_bdevs_discovered": 2, 00:12:18.863 "num_base_bdevs_operational": 3, 00:12:18.863 "base_bdevs_list": [ 00:12:18.863 { 00:12:18.863 "name": null, 00:12:18.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.863 "is_configured": false, 00:12:18.863 "data_offset": 2048, 00:12:18.863 
"data_size": 63488 00:12:18.863 }, 00:12:18.863 { 00:12:18.863 "name": "pt2", 00:12:18.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:18.863 "is_configured": true, 00:12:18.863 "data_offset": 2048, 00:12:18.863 "data_size": 63488 00:12:18.863 }, 00:12:18.863 { 00:12:18.863 "name": "pt3", 00:12:18.863 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:18.863 "is_configured": true, 00:12:18.863 "data_offset": 2048, 00:12:18.863 "data_size": 63488 00:12:18.863 }, 00:12:18.863 { 00:12:18.863 "name": null, 00:12:18.863 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:18.863 "is_configured": false, 00:12:18.863 "data_offset": 2048, 00:12:18.863 "data_size": 63488 00:12:18.863 } 00:12:18.863 ] 00:12:18.863 }' 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.863 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 [2024-11-18 
04:01:15.715071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:19.123 [2024-11-18 04:01:15.715230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.123 [2024-11-18 04:01:15.715261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:19.123 [2024-11-18 04:01:15.715270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.123 [2024-11-18 04:01:15.715821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.123 [2024-11-18 04:01:15.715862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:19.123 [2024-11-18 04:01:15.715959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:19.123 [2024-11-18 04:01:15.715991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:19.123 [2024-11-18 04:01:15.716126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:19.123 [2024-11-18 04:01:15.716134] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.123 [2024-11-18 04:01:15.716399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:19.123 [2024-11-18 04:01:15.716551] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:19.123 [2024-11-18 04:01:15.716563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:19.123 [2024-11-18 04:01:15.716715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.123 pt4 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:19.123 04:01:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.123 "name": "raid_bdev1", 00:12:19.123 "uuid": "b60f1185-77e7-48dc-8022-5984ddd5e58e", 00:12:19.123 "strip_size_kb": 0, 00:12:19.123 "state": "online", 00:12:19.123 "raid_level": "raid1", 00:12:19.123 "superblock": true, 00:12:19.123 "num_base_bdevs": 4, 00:12:19.123 "num_base_bdevs_discovered": 3, 00:12:19.123 "num_base_bdevs_operational": 3, 00:12:19.123 "base_bdevs_list": [ 00:12:19.123 { 
00:12:19.123 "name": null, 00:12:19.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.123 "is_configured": false, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "pt2", 00:12:19.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "pt3", 00:12:19.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "pt4", 00:12:19.123 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 } 00:12:19.123 ] 00:12:19.123 }' 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.123 04:01:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:19.693 
04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.693 [2024-11-18 04:01:16.202667] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b60f1185-77e7-48dc-8022-5984ddd5e58e '!=' b60f1185-77e7-48dc-8022-5984ddd5e58e ']' 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74505 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74505 ']' 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74505 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74505 00:12:19.693 killing process with pid 74505 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74505' 00:12:19.693 04:01:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74505 00:12:19.693 [2024-11-18 04:01:16.281496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.693 [2024-11-18 04:01:16.281629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.693 04:01:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74505 00:12:19.693 [2024-11-18 04:01:16.281714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.693 [2024-11-18 04:01:16.281728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:20.263 [2024-11-18 04:01:16.703675] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.643 04:01:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:21.643 00:12:21.643 real 0m8.695s 00:12:21.643 user 0m13.556s 00:12:21.643 sys 0m1.622s 00:12:21.643 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.643 04:01:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.643 ************************************ 00:12:21.643 END TEST raid_superblock_test 00:12:21.643 ************************************ 00:12:21.643 04:01:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:21.643 04:01:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:21.643 04:01:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.643 04:01:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.643 ************************************ 00:12:21.643 START TEST raid_read_error_test 00:12:21.643 ************************************ 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:21.643 
04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:21.643 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:21.644 04:01:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pox30hpVO2 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74998 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74998 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74998 ']' 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.644 04:01:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.644 [2024-11-18 04:01:18.059386] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:21.644 [2024-11-18 04:01:18.059512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74998 ] 00:12:21.644 [2024-11-18 04:01:18.235446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.904 [2024-11-18 04:01:18.380756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.164 [2024-11-18 04:01:18.609582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.164 [2024-11-18 04:01:18.609653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.422 BaseBdev1_malloc 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.422 04:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.423 true 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.423 [2024-11-18 04:01:18.947663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:22.423 [2024-11-18 04:01:18.947733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.423 [2024-11-18 04:01:18.947755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:22.423 [2024-11-18 04:01:18.947767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.423 [2024-11-18 04:01:18.950158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.423 [2024-11-18 04:01:18.950195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:22.423 BaseBdev1 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.423 04:01:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.423 BaseBdev2_malloc 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.423 true 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.423 [2024-11-18 04:01:19.020805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:22.423 [2024-11-18 04:01:19.020876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.423 [2024-11-18 04:01:19.020892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:22.423 [2024-11-18 04:01:19.020903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.423 [2024-11-18 04:01:19.023159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.423 [2024-11-18 04:01:19.023194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:22.423 BaseBdev2 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.423 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 BaseBdev3_malloc 00:12:22.690 04:01:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 true 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 [2024-11-18 04:01:19.103576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:22.690 [2024-11-18 04:01:19.103633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.690 [2024-11-18 04:01:19.103652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:22.690 [2024-11-18 04:01:19.103663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.690 [2024-11-18 04:01:19.105984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.690 [2024-11-18 04:01:19.106017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:22.690 BaseBdev3 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 BaseBdev4_malloc 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 true 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 [2024-11-18 04:01:19.174374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:22.690 [2024-11-18 04:01:19.174422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.690 [2024-11-18 04:01:19.174439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.690 [2024-11-18 04:01:19.174463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.690 [2024-11-18 04:01:19.176730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.690 [2024-11-18 04:01:19.176765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:22.690 BaseBdev4 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 [2024-11-18 04:01:19.186423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.690 [2024-11-18 04:01:19.188431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.690 [2024-11-18 04:01:19.188506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:22.690 [2024-11-18 04:01:19.188565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:22.690 [2024-11-18 04:01:19.188778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:22.690 [2024-11-18 04:01:19.188796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.690 [2024-11-18 04:01:19.189034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:22.690 [2024-11-18 04:01:19.189200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:22.690 [2024-11-18 04:01:19.189215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:22.690 [2024-11-18 04:01:19.189364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:22.690 04:01:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.690 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.690 "name": "raid_bdev1", 00:12:22.690 "uuid": "6af8a7a9-868f-4930-b56d-43785f7a78dc", 00:12:22.690 "strip_size_kb": 0, 00:12:22.690 "state": "online", 00:12:22.690 "raid_level": "raid1", 00:12:22.690 "superblock": true, 00:12:22.690 "num_base_bdevs": 4, 00:12:22.690 "num_base_bdevs_discovered": 4, 00:12:22.690 "num_base_bdevs_operational": 4, 00:12:22.690 "base_bdevs_list": [ 00:12:22.690 { 
00:12:22.690 "name": "BaseBdev1", 00:12:22.691 "uuid": "c984036e-b600-586e-a31d-d9e0d02f561a", 00:12:22.691 "is_configured": true, 00:12:22.691 "data_offset": 2048, 00:12:22.691 "data_size": 63488 00:12:22.691 }, 00:12:22.691 { 00:12:22.691 "name": "BaseBdev2", 00:12:22.691 "uuid": "fee6ab53-0671-59e4-bdb9-4c0475736d26", 00:12:22.691 "is_configured": true, 00:12:22.691 "data_offset": 2048, 00:12:22.691 "data_size": 63488 00:12:22.691 }, 00:12:22.691 { 00:12:22.691 "name": "BaseBdev3", 00:12:22.691 "uuid": "77e5bca7-678e-502f-b6b4-6cca5478821d", 00:12:22.691 "is_configured": true, 00:12:22.691 "data_offset": 2048, 00:12:22.691 "data_size": 63488 00:12:22.691 }, 00:12:22.691 { 00:12:22.691 "name": "BaseBdev4", 00:12:22.691 "uuid": "247af948-dd9a-596f-928e-60d348c54d5e", 00:12:22.691 "is_configured": true, 00:12:22.691 "data_offset": 2048, 00:12:22.691 "data_size": 63488 00:12:22.691 } 00:12:22.691 ] 00:12:22.691 }' 00:12:22.691 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.691 04:01:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.310 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:23.310 04:01:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:23.310 [2024-11-18 04:01:19.687142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.267 04:01:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.267 04:01:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.267 "name": "raid_bdev1", 00:12:24.267 "uuid": "6af8a7a9-868f-4930-b56d-43785f7a78dc", 00:12:24.267 "strip_size_kb": 0, 00:12:24.267 "state": "online", 00:12:24.267 "raid_level": "raid1", 00:12:24.267 "superblock": true, 00:12:24.267 "num_base_bdevs": 4, 00:12:24.267 "num_base_bdevs_discovered": 4, 00:12:24.267 "num_base_bdevs_operational": 4, 00:12:24.267 "base_bdevs_list": [ 00:12:24.267 { 00:12:24.267 "name": "BaseBdev1", 00:12:24.267 "uuid": "c984036e-b600-586e-a31d-d9e0d02f561a", 00:12:24.267 "is_configured": true, 00:12:24.267 "data_offset": 2048, 00:12:24.267 "data_size": 63488 00:12:24.267 }, 00:12:24.267 { 00:12:24.267 "name": "BaseBdev2", 00:12:24.267 "uuid": "fee6ab53-0671-59e4-bdb9-4c0475736d26", 00:12:24.267 "is_configured": true, 00:12:24.267 "data_offset": 2048, 00:12:24.267 "data_size": 63488 00:12:24.267 }, 00:12:24.267 { 00:12:24.267 "name": "BaseBdev3", 00:12:24.267 "uuid": "77e5bca7-678e-502f-b6b4-6cca5478821d", 00:12:24.267 "is_configured": true, 00:12:24.267 "data_offset": 2048, 00:12:24.267 "data_size": 63488 00:12:24.267 }, 00:12:24.267 { 00:12:24.267 "name": "BaseBdev4", 00:12:24.267 "uuid": "247af948-dd9a-596f-928e-60d348c54d5e", 00:12:24.267 "is_configured": true, 00:12:24.267 "data_offset": 2048, 00:12:24.267 "data_size": 63488 00:12:24.267 } 00:12:24.267 ] 00:12:24.267 }' 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.267 04:01:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.528 [2024-11-18 04:01:21.044126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.528 [2024-11-18 04:01:21.044179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.528 [2024-11-18 04:01:21.046659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.528 [2024-11-18 04:01:21.046730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.528 [2024-11-18 04:01:21.046865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.528 [2024-11-18 04:01:21.046886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:24.528 { 00:12:24.528 "results": [ 00:12:24.528 { 00:12:24.528 "job": "raid_bdev1", 00:12:24.528 "core_mask": "0x1", 00:12:24.528 "workload": "randrw", 00:12:24.528 "percentage": 50, 00:12:24.528 "status": "finished", 00:12:24.528 "queue_depth": 1, 00:12:24.528 "io_size": 131072, 00:12:24.528 "runtime": 1.357466, 00:12:24.528 "iops": 7846.237032824395, 00:12:24.528 "mibps": 980.7796291030494, 00:12:24.528 "io_failed": 0, 00:12:24.528 "io_timeout": 0, 00:12:24.528 "avg_latency_us": 124.91969763996984, 00:12:24.528 "min_latency_us": 22.134497816593885, 00:12:24.528 "max_latency_us": 1337.907423580786 00:12:24.528 } 00:12:24.528 ], 00:12:24.528 "core_count": 1 00:12:24.528 } 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74998 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74998 ']' 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74998 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74998 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.528 killing process with pid 74998 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74998' 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74998 00:12:24.528 [2024-11-18 04:01:21.092328] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.528 04:01:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74998 00:12:25.099 [2024-11-18 04:01:21.441869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.039 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pox30hpVO2 00:12:26.039 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:26.039 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:26.300 00:12:26.300 real 0m4.731s 00:12:26.300 user 0m5.401s 00:12:26.300 sys 0m0.666s 
00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.300 04:01:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.300 ************************************ 00:12:26.300 END TEST raid_read_error_test 00:12:26.300 ************************************ 00:12:26.300 04:01:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:26.300 04:01:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.300 04:01:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.300 04:01:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.300 ************************************ 00:12:26.300 START TEST raid_write_error_test 00:12:26.300 ************************************ 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PTpkldUv50 00:12:26.300 04:01:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75142 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75142 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75142 ']' 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.300 04:01:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.300 [2024-11-18 04:01:22.860988] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:26.300 [2024-11-18 04:01:22.861096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75142 ] 00:12:26.560 [2024-11-18 04:01:23.033976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.560 [2024-11-18 04:01:23.172560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.821 [2024-11-18 04:01:23.407262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.821 [2024-11-18 04:01:23.407342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.081 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.081 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:27.081 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.081 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.081 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.081 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 BaseBdev1_malloc 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 true 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 [2024-11-18 04:01:23.750019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.341 [2024-11-18 04:01:23.750082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.341 [2024-11-18 04:01:23.750103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:27.341 [2024-11-18 04:01:23.750115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.341 [2024-11-18 04:01:23.752501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.341 [2024-11-18 04:01:23.752536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.341 BaseBdev1 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 BaseBdev2_malloc 00:12:27.341 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.342 04:01:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 true 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 [2024-11-18 04:01:23.822769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.342 [2024-11-18 04:01:23.822837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.342 [2024-11-18 04:01:23.822856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.342 [2024-11-18 04:01:23.822867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.342 [2024-11-18 04:01:23.825381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.342 [2024-11-18 04:01:23.825417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.342 BaseBdev2 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:27.342 BaseBdev3_malloc 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 true 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 [2024-11-18 04:01:23.918375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:27.342 [2024-11-18 04:01:23.918439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.342 [2024-11-18 04:01:23.918460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:27.342 [2024-11-18 04:01:23.918471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.342 [2024-11-18 04:01:23.920938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.342 [2024-11-18 04:01:23.920973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:27.342 BaseBdev3 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.342 BaseBdev4_malloc 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.342 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 true 00:12:27.602 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.602 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:27.602 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.602 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.602 [2024-11-18 04:01:23.990979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:27.603 [2024-11-18 04:01:23.991042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.603 [2024-11-18 04:01:23.991061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:27.603 [2024-11-18 04:01:23.991072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.603 [2024-11-18 04:01:23.993451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.603 [2024-11-18 04:01:23.993488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:27.603 BaseBdev4 
00:12:27.603 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.603 04:01:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:27.603 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.603 04:01:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.603 [2024-11-18 04:01:24.003024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.603 [2024-11-18 04:01:24.005115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.603 [2024-11-18 04:01:24.005193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.603 [2024-11-18 04:01:24.005256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.603 [2024-11-18 04:01:24.005479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:27.603 [2024-11-18 04:01:24.005499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.603 [2024-11-18 04:01:24.005743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:27.603 [2024-11-18 04:01:24.005934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:27.603 [2024-11-18 04:01:24.005948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:27.603 [2024-11-18 04:01:24.006116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.603 "name": "raid_bdev1", 00:12:27.603 "uuid": "41f08ff0-290b-461a-bb51-cb34cd8ac724", 00:12:27.603 "strip_size_kb": 0, 00:12:27.603 "state": "online", 00:12:27.603 "raid_level": "raid1", 00:12:27.603 "superblock": true, 00:12:27.603 "num_base_bdevs": 4, 00:12:27.603 "num_base_bdevs_discovered": 4, 00:12:27.603 
"num_base_bdevs_operational": 4, 00:12:27.603 "base_bdevs_list": [ 00:12:27.603 { 00:12:27.603 "name": "BaseBdev1", 00:12:27.603 "uuid": "da5ca647-21b6-5420-941d-d38bb63d3de8", 00:12:27.603 "is_configured": true, 00:12:27.603 "data_offset": 2048, 00:12:27.603 "data_size": 63488 00:12:27.603 }, 00:12:27.603 { 00:12:27.603 "name": "BaseBdev2", 00:12:27.603 "uuid": "086d558a-f84f-505a-b8e0-f5158983eef0", 00:12:27.603 "is_configured": true, 00:12:27.603 "data_offset": 2048, 00:12:27.603 "data_size": 63488 00:12:27.603 }, 00:12:27.603 { 00:12:27.603 "name": "BaseBdev3", 00:12:27.603 "uuid": "bb331822-593b-5f31-94ee-32af7d825a7d", 00:12:27.603 "is_configured": true, 00:12:27.603 "data_offset": 2048, 00:12:27.603 "data_size": 63488 00:12:27.603 }, 00:12:27.603 { 00:12:27.603 "name": "BaseBdev4", 00:12:27.603 "uuid": "fd9e8476-52b8-5e2c-a691-1cafa684d068", 00:12:27.603 "is_configured": true, 00:12:27.603 "data_offset": 2048, 00:12:27.603 "data_size": 63488 00:12:27.603 } 00:12:27.603 ] 00:12:27.603 }' 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.603 04:01:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.863 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:27.863 04:01:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.123 [2024-11-18 04:01:24.519605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.063 [2024-11-18 04:01:25.444996] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:29.063 [2024-11-18 04:01:25.445067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.063 [2024-11-18 04:01:25.445322] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.063 "name": "raid_bdev1", 00:12:29.063 "uuid": "41f08ff0-290b-461a-bb51-cb34cd8ac724", 00:12:29.063 "strip_size_kb": 0, 00:12:29.063 "state": "online", 00:12:29.063 "raid_level": "raid1", 00:12:29.063 "superblock": true, 00:12:29.063 "num_base_bdevs": 4, 00:12:29.063 "num_base_bdevs_discovered": 3, 00:12:29.063 "num_base_bdevs_operational": 3, 00:12:29.063 "base_bdevs_list": [ 00:12:29.063 { 00:12:29.063 "name": null, 00:12:29.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.063 "is_configured": false, 00:12:29.063 "data_offset": 0, 00:12:29.063 "data_size": 63488 00:12:29.063 }, 00:12:29.063 { 00:12:29.063 "name": "BaseBdev2", 00:12:29.063 "uuid": "086d558a-f84f-505a-b8e0-f5158983eef0", 00:12:29.063 "is_configured": true, 00:12:29.063 "data_offset": 2048, 00:12:29.063 "data_size": 63488 00:12:29.063 }, 00:12:29.063 { 00:12:29.063 "name": "BaseBdev3", 00:12:29.063 "uuid": "bb331822-593b-5f31-94ee-32af7d825a7d", 00:12:29.063 "is_configured": true, 00:12:29.063 "data_offset": 2048, 00:12:29.063 "data_size": 63488 00:12:29.063 }, 00:12:29.063 { 00:12:29.063 "name": "BaseBdev4", 00:12:29.063 "uuid": "fd9e8476-52b8-5e2c-a691-1cafa684d068", 00:12:29.063 "is_configured": true, 00:12:29.063 "data_offset": 2048, 00:12:29.063 "data_size": 63488 00:12:29.063 } 00:12:29.063 ] 
00:12:29.063 }' 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.063 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.323 [2024-11-18 04:01:25.923955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.323 [2024-11-18 04:01:25.924003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.323 [2024-11-18 04:01:25.926560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.323 [2024-11-18 04:01:25.926614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.323 [2024-11-18 04:01:25.926726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.323 [2024-11-18 04:01:25.926744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:29.323 { 00:12:29.323 "results": [ 00:12:29.323 { 00:12:29.323 "job": "raid_bdev1", 00:12:29.323 "core_mask": "0x1", 00:12:29.323 "workload": "randrw", 00:12:29.323 "percentage": 50, 00:12:29.323 "status": "finished", 00:12:29.323 "queue_depth": 1, 00:12:29.323 "io_size": 131072, 00:12:29.323 "runtime": 1.404982, 00:12:29.323 "iops": 8704.737854292795, 00:12:29.323 "mibps": 1088.0922317865993, 00:12:29.323 "io_failed": 0, 00:12:29.323 "io_timeout": 0, 00:12:29.323 "avg_latency_us": 112.3607251122053, 00:12:29.323 "min_latency_us": 22.022707423580787, 00:12:29.323 "max_latency_us": 1459.5353711790392 00:12:29.323 } 00:12:29.323 ], 00:12:29.323 "core_count": 1 
00:12:29.323 } 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75142 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75142 ']' 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75142 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75142 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.323 killing process with pid 75142 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75142' 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75142 00:12:29.323 [2024-11-18 04:01:25.961580] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.323 04:01:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75142 00:12:29.893 [2024-11-18 04:01:26.316081] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PTpkldUv50 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:31.277 00:12:31.277 real 0m4.824s 00:12:31.277 user 0m5.535s 00:12:31.277 sys 0m0.693s 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.277 04:01:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 ************************************ 00:12:31.277 END TEST raid_write_error_test 00:12:31.277 ************************************ 00:12:31.277 04:01:27 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:31.277 04:01:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:31.277 04:01:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:31.277 04:01:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:31.277 04:01:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.277 04:01:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 ************************************ 00:12:31.277 START TEST raid_rebuild_test 00:12:31.277 ************************************ 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:31.277 
04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75287 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75287 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75287 ']' 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.277 04:01:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.277 Zero copy mechanism will not be used. 00:12:31.277 [2024-11-18 04:01:27.749829] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:12:31.277 [2024-11-18 04:01:27.749958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75287 ] 00:12:31.537 [2024-11-18 04:01:27.924972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.537 [2024-11-18 04:01:28.065828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.797 [2024-11-18 04:01:28.304524] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.797 [2024-11-18 04:01:28.304579] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.058 BaseBdev1_malloc 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.058 [2024-11-18 04:01:28.625028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.058 
[2024-11-18 04:01:28.625107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.058 [2024-11-18 04:01:28.625133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:32.058 [2024-11-18 04:01:28.625146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.058 [2024-11-18 04:01:28.627415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.058 [2024-11-18 04:01:28.627450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.058 BaseBdev1 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.058 BaseBdev2_malloc 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.058 [2024-11-18 04:01:28.683991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:32.058 [2024-11-18 04:01:28.684060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.058 [2024-11-18 04:01:28.684083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:32.058 [2024-11-18 04:01:28.684094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.058 [2024-11-18 04:01:28.686443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.058 [2024-11-18 04:01:28.686481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.058 BaseBdev2 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.058 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.318 spare_malloc 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.318 spare_delay 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.318 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.319 [2024-11-18 04:01:28.770857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.319 [2024-11-18 04:01:28.770922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:32.319 [2024-11-18 04:01:28.770942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:32.319 [2024-11-18 04:01:28.770954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.319 [2024-11-18 04:01:28.773240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.319 [2024-11-18 04:01:28.773276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.319 spare 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.319 [2024-11-18 04:01:28.782885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.319 [2024-11-18 04:01:28.784871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.319 [2024-11-18 04:01:28.784960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.319 [2024-11-18 04:01:28.784973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:32.319 [2024-11-18 04:01:28.785232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:32.319 [2024-11-18 04:01:28.785395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.319 [2024-11-18 04:01:28.785411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:32.319 [2024-11-18 04:01:28.785559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.319 "name": "raid_bdev1", 00:12:32.319 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:32.319 "strip_size_kb": 0, 00:12:32.319 "state": "online", 00:12:32.319 
"raid_level": "raid1", 00:12:32.319 "superblock": false, 00:12:32.319 "num_base_bdevs": 2, 00:12:32.319 "num_base_bdevs_discovered": 2, 00:12:32.319 "num_base_bdevs_operational": 2, 00:12:32.319 "base_bdevs_list": [ 00:12:32.319 { 00:12:32.319 "name": "BaseBdev1", 00:12:32.319 "uuid": "d5b0d45c-c453-508b-b0e4-4846233ceb85", 00:12:32.319 "is_configured": true, 00:12:32.319 "data_offset": 0, 00:12:32.319 "data_size": 65536 00:12:32.319 }, 00:12:32.319 { 00:12:32.319 "name": "BaseBdev2", 00:12:32.319 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:32.319 "is_configured": true, 00:12:32.319 "data_offset": 0, 00:12:32.319 "data_size": 65536 00:12:32.319 } 00:12:32.319 ] 00:12:32.319 }' 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.319 04:01:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.889 [2024-11-18 04:01:29.254393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:32.889 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:32.889 [2024-11-18 04:01:29.509691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:32.889 /dev/nbd0 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.150 1+0 records in 00:12:33.150 1+0 records out 00:12:33.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358069 s, 11.4 MB/s 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:33.150 04:01:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:38.426 65536+0 records in 00:12:38.426 65536+0 records out 00:12:38.426 33554432 bytes (34 MB, 32 MiB) copied, 4.47394 s, 7.5 MB/s 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:38.426 [2024-11-18 04:01:34.317563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 [2024-11-18 04:01:34.333676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.426 04:01:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.426 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.426 "name": "raid_bdev1", 00:12:38.426 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:38.426 "strip_size_kb": 0, 00:12:38.426 "state": "online", 00:12:38.426 "raid_level": "raid1", 00:12:38.426 "superblock": false, 00:12:38.426 "num_base_bdevs": 2, 00:12:38.426 "num_base_bdevs_discovered": 1, 00:12:38.426 "num_base_bdevs_operational": 1, 00:12:38.426 "base_bdevs_list": [ 00:12:38.426 { 00:12:38.426 "name": null, 00:12:38.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.426 "is_configured": false, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 65536 00:12:38.426 }, 00:12:38.426 { 00:12:38.426 "name": "BaseBdev2", 00:12:38.426 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:38.426 "is_configured": true, 00:12:38.426 "data_offset": 0, 00:12:38.426 "data_size": 65536 00:12:38.426 } 00:12:38.427 ] 00:12:38.427 }' 00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.427 [2024-11-18 04:01:34.777007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:38.427 [2024-11-18 04:01:34.799027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.427 04:01:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:38.427 [2024-11-18 04:01:34.801638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.363 "name": "raid_bdev1", 00:12:39.363 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:39.363 "strip_size_kb": 0, 00:12:39.363 "state": "online", 00:12:39.363 "raid_level": "raid1", 00:12:39.363 "superblock": false, 00:12:39.363 "num_base_bdevs": 2, 00:12:39.363 "num_base_bdevs_discovered": 2, 00:12:39.363 "num_base_bdevs_operational": 2, 00:12:39.363 "process": { 00:12:39.363 "type": "rebuild", 00:12:39.363 "target": "spare", 00:12:39.363 "progress": { 00:12:39.363 
"blocks": 20480, 00:12:39.363 "percent": 31 00:12:39.363 } 00:12:39.363 }, 00:12:39.363 "base_bdevs_list": [ 00:12:39.363 { 00:12:39.363 "name": "spare", 00:12:39.363 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:39.363 "is_configured": true, 00:12:39.363 "data_offset": 0, 00:12:39.363 "data_size": 65536 00:12:39.363 }, 00:12:39.363 { 00:12:39.363 "name": "BaseBdev2", 00:12:39.363 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:39.363 "is_configured": true, 00:12:39.363 "data_offset": 0, 00:12:39.363 "data_size": 65536 00:12:39.363 } 00:12:39.363 ] 00:12:39.363 }' 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.363 04:01:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.363 [2024-11-18 04:01:35.940600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.623 [2024-11-18 04:01:36.012483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:39.623 [2024-11-18 04:01:36.012580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.623 [2024-11-18 04:01:36.012598] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:39.623 [2024-11-18 04:01:36.012611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:39.624 04:01:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.624 "name": "raid_bdev1", 00:12:39.624 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:39.624 "strip_size_kb": 0, 00:12:39.624 "state": "online", 00:12:39.624 "raid_level": "raid1", 00:12:39.624 
"superblock": false, 00:12:39.624 "num_base_bdevs": 2, 00:12:39.624 "num_base_bdevs_discovered": 1, 00:12:39.624 "num_base_bdevs_operational": 1, 00:12:39.624 "base_bdevs_list": [ 00:12:39.624 { 00:12:39.624 "name": null, 00:12:39.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.624 "is_configured": false, 00:12:39.624 "data_offset": 0, 00:12:39.624 "data_size": 65536 00:12:39.624 }, 00:12:39.624 { 00:12:39.624 "name": "BaseBdev2", 00:12:39.624 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:39.624 "is_configured": true, 00:12:39.624 "data_offset": 0, 00:12:39.624 "data_size": 65536 00:12:39.624 } 00:12:39.624 ] 00:12:39.624 }' 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.624 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.884 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.143 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:40.143 "name": "raid_bdev1", 00:12:40.144 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:40.144 "strip_size_kb": 0, 00:12:40.144 "state": "online", 00:12:40.144 "raid_level": "raid1", 00:12:40.144 "superblock": false, 00:12:40.144 "num_base_bdevs": 2, 00:12:40.144 "num_base_bdevs_discovered": 1, 00:12:40.144 "num_base_bdevs_operational": 1, 00:12:40.144 "base_bdevs_list": [ 00:12:40.144 { 00:12:40.144 "name": null, 00:12:40.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.144 "is_configured": false, 00:12:40.144 "data_offset": 0, 00:12:40.144 "data_size": 65536 00:12:40.144 }, 00:12:40.144 { 00:12:40.144 "name": "BaseBdev2", 00:12:40.144 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:40.144 "is_configured": true, 00:12:40.144 "data_offset": 0, 00:12:40.144 "data_size": 65536 00:12:40.144 } 00:12:40.144 ] 00:12:40.144 }' 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.144 [2024-11-18 04:01:36.661914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:40.144 [2024-11-18 04:01:36.681502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:40.144 04:01:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.144 
04:01:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:40.144 [2024-11-18 04:01:36.683703] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.082 04:01:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.341 "name": "raid_bdev1", 00:12:41.341 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:41.341 "strip_size_kb": 0, 00:12:41.341 "state": "online", 00:12:41.341 "raid_level": "raid1", 00:12:41.341 "superblock": false, 00:12:41.341 "num_base_bdevs": 2, 00:12:41.341 "num_base_bdevs_discovered": 2, 00:12:41.341 "num_base_bdevs_operational": 2, 00:12:41.341 "process": { 00:12:41.341 "type": "rebuild", 00:12:41.341 "target": "spare", 00:12:41.341 "progress": { 00:12:41.341 "blocks": 20480, 00:12:41.341 "percent": 31 00:12:41.341 } 00:12:41.341 }, 00:12:41.341 "base_bdevs_list": [ 
00:12:41.341 { 00:12:41.341 "name": "spare", 00:12:41.341 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:41.341 "is_configured": true, 00:12:41.341 "data_offset": 0, 00:12:41.341 "data_size": 65536 00:12:41.341 }, 00:12:41.341 { 00:12:41.341 "name": "BaseBdev2", 00:12:41.341 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:41.341 "is_configured": true, 00:12:41.341 "data_offset": 0, 00:12:41.341 "data_size": 65536 00:12:41.341 } 00:12:41.341 ] 00:12:41.341 }' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=371 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.341 
04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.341 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.341 "name": "raid_bdev1", 00:12:41.341 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:41.341 "strip_size_kb": 0, 00:12:41.341 "state": "online", 00:12:41.341 "raid_level": "raid1", 00:12:41.341 "superblock": false, 00:12:41.341 "num_base_bdevs": 2, 00:12:41.341 "num_base_bdevs_discovered": 2, 00:12:41.341 "num_base_bdevs_operational": 2, 00:12:41.341 "process": { 00:12:41.341 "type": "rebuild", 00:12:41.341 "target": "spare", 00:12:41.341 "progress": { 00:12:41.342 "blocks": 22528, 00:12:41.342 "percent": 34 00:12:41.342 } 00:12:41.342 }, 00:12:41.342 "base_bdevs_list": [ 00:12:41.342 { 00:12:41.342 "name": "spare", 00:12:41.342 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:41.342 "is_configured": true, 00:12:41.342 "data_offset": 0, 00:12:41.342 "data_size": 65536 00:12:41.342 }, 00:12:41.342 { 00:12:41.342 "name": "BaseBdev2", 00:12:41.342 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:41.342 "is_configured": true, 00:12:41.342 "data_offset": 0, 00:12:41.342 "data_size": 65536 00:12:41.342 } 00:12:41.342 ] 00:12:41.342 }' 00:12:41.342 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.342 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:41.342 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.342 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.342 04:01:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.721 04:01:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.721 "name": "raid_bdev1", 00:12:42.721 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:42.721 "strip_size_kb": 0, 00:12:42.721 "state": "online", 00:12:42.721 "raid_level": "raid1", 00:12:42.721 "superblock": false, 00:12:42.721 "num_base_bdevs": 2, 00:12:42.721 "num_base_bdevs_discovered": 2, 00:12:42.721 "num_base_bdevs_operational": 2, 00:12:42.721 "process": { 
00:12:42.721 "type": "rebuild", 00:12:42.721 "target": "spare", 00:12:42.721 "progress": { 00:12:42.721 "blocks": 45056, 00:12:42.721 "percent": 68 00:12:42.721 } 00:12:42.721 }, 00:12:42.721 "base_bdevs_list": [ 00:12:42.721 { 00:12:42.721 "name": "spare", 00:12:42.721 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:42.721 "is_configured": true, 00:12:42.721 "data_offset": 0, 00:12:42.721 "data_size": 65536 00:12:42.721 }, 00:12:42.721 { 00:12:42.721 "name": "BaseBdev2", 00:12:42.721 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:42.721 "is_configured": true, 00:12:42.721 "data_offset": 0, 00:12:42.721 "data_size": 65536 00:12:42.721 } 00:12:42.721 ] 00:12:42.721 }' 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.721 04:01:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.290 [2024-11-18 04:01:39.911326] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:43.290 [2024-11-18 04:01:39.911434] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:43.290 [2024-11-18 04:01:39.911486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.549 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.549 "name": "raid_bdev1", 00:12:43.549 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:43.549 "strip_size_kb": 0, 00:12:43.549 "state": "online", 00:12:43.549 "raid_level": "raid1", 00:12:43.549 "superblock": false, 00:12:43.549 "num_base_bdevs": 2, 00:12:43.549 "num_base_bdevs_discovered": 2, 00:12:43.549 "num_base_bdevs_operational": 2, 00:12:43.549 "base_bdevs_list": [ 00:12:43.549 { 00:12:43.549 "name": "spare", 00:12:43.549 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:43.549 "is_configured": true, 00:12:43.549 "data_offset": 0, 00:12:43.550 "data_size": 65536 00:12:43.550 }, 00:12:43.550 { 00:12:43.550 "name": "BaseBdev2", 00:12:43.550 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:43.550 "is_configured": true, 00:12:43.550 "data_offset": 0, 00:12:43.550 "data_size": 65536 00:12:43.550 } 00:12:43.550 ] 00:12:43.550 }' 00:12:43.550 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:43.809 04:01:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.809 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.809 "name": "raid_bdev1", 00:12:43.809 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:43.809 "strip_size_kb": 0, 00:12:43.809 "state": "online", 00:12:43.809 "raid_level": "raid1", 00:12:43.809 "superblock": false, 00:12:43.809 "num_base_bdevs": 2, 00:12:43.809 "num_base_bdevs_discovered": 2, 00:12:43.809 "num_base_bdevs_operational": 2, 00:12:43.809 "base_bdevs_list": [ 00:12:43.809 { 00:12:43.809 "name": "spare", 00:12:43.809 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:43.809 "is_configured": true, 
00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 }, 00:12:43.809 { 00:12:43.809 "name": "BaseBdev2", 00:12:43.809 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:43.809 "is_configured": true, 00:12:43.809 "data_offset": 0, 00:12:43.809 "data_size": 65536 00:12:43.809 } 00:12:43.810 ] 00:12:43.810 }' 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.810 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.069 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.069 "name": "raid_bdev1", 00:12:44.069 "uuid": "87a5a2aa-2e6b-46bf-b46f-4ffb7efbca3e", 00:12:44.069 "strip_size_kb": 0, 00:12:44.069 "state": "online", 00:12:44.069 "raid_level": "raid1", 00:12:44.069 "superblock": false, 00:12:44.069 "num_base_bdevs": 2, 00:12:44.069 "num_base_bdevs_discovered": 2, 00:12:44.069 "num_base_bdevs_operational": 2, 00:12:44.069 "base_bdevs_list": [ 00:12:44.069 { 00:12:44.069 "name": "spare", 00:12:44.069 "uuid": "f9b9cca6-c2d7-5d36-8233-862dbafd2e16", 00:12:44.069 "is_configured": true, 00:12:44.069 "data_offset": 0, 00:12:44.069 "data_size": 65536 00:12:44.069 }, 00:12:44.069 { 00:12:44.069 "name": "BaseBdev2", 00:12:44.069 "uuid": "a0313aaf-e4ab-58a2-b3c6-e0dbfae00d24", 00:12:44.069 "is_configured": true, 00:12:44.069 "data_offset": 0, 00:12:44.069 "data_size": 65536 00:12:44.069 } 00:12:44.069 ] 00:12:44.069 }' 00:12:44.069 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.069 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.329 [2024-11-18 04:01:40.852274] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:44.329 [2024-11-18 04:01:40.852318] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.329 [2024-11-18 04:01:40.852430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.329 [2024-11-18 04:01:40.852516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.329 [2024-11-18 04:01:40.852536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.329 04:01:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:44.589 /dev/nbd0 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.589 1+0 records in 00:12:44.589 1+0 records out 00:12:44.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041587 s, 9.8 MB/s 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.589 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:44.848 /dev/nbd1 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:44.848 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.848 1+0 records in 00:12:44.848 1+0 records out 00:12:44.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257367 s, 15.9 MB/s 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.849 04:01:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.151 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:45.410 04:01:41 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.410 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.410 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.410 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.410 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.410 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.411 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:45.411 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.411 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.411 04:01:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:45.670 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:45.670 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:45.670 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:45.670 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75287 00:12:45.671 04:01:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75287 ']' 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75287 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75287 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.671 killing process with pid 75287 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75287' 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75287 00:12:45.671 Received shutdown signal, test time was about 60.000000 seconds 00:12:45.671 00:12:45.671 Latency(us) 00:12:45.671 [2024-11-18T04:01:42.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.671 [2024-11-18T04:01:42.312Z] =================================================================================================================== 00:12:45.671 [2024-11-18T04:01:42.312Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:45.671 [2024-11-18 04:01:42.186142] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.671 04:01:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75287 00:12:45.930 [2024-11-18 04:01:42.539121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:47.311 00:12:47.311 real 0m16.155s 00:12:47.311 user 0m17.992s 00:12:47.311 sys 0m3.459s 00:12:47.311 04:01:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.311 ************************************ 00:12:47.311 END TEST raid_rebuild_test 00:12:47.311 ************************************ 00:12:47.311 04:01:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:47.311 04:01:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:47.311 04:01:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.311 04:01:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.311 ************************************ 00:12:47.311 START TEST raid_rebuild_test_sb 00:12:47.311 ************************************ 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75716 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75716 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75716 ']' 00:12:47.311 04:01:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.311 04:01:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.571 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:47.571 Zero copy mechanism will not be used. 00:12:47.571 [2024-11-18 04:01:43.984673] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:47.571 [2024-11-18 04:01:43.984791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75716 ] 00:12:47.571 [2024-11-18 04:01:44.159838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.831 [2024-11-18 04:01:44.287125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.090 [2024-11-18 04:01:44.521464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.090 [2024-11-18 04:01:44.521515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 BaseBdev1_malloc 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 [2024-11-18 04:01:44.922893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:48.350 [2024-11-18 04:01:44.922974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.350 [2024-11-18 04:01:44.923004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:48.350 [2024-11-18 04:01:44.923018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.350 [2024-11-18 04:01:44.925654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.350 [2024-11-18 04:01:44.925696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:48.350 BaseBdev1 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:48.350 04:01:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 BaseBdev2_malloc 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.350 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.350 [2024-11-18 04:01:44.985101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:48.350 [2024-11-18 04:01:44.985165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.350 [2024-11-18 04:01:44.985188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:48.350 [2024-11-18 04:01:44.985203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.350 [2024-11-18 04:01:44.987663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.350 [2024-11-18 04:01:44.987702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:48.610 BaseBdev2 00:12:48.610 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.610 04:01:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:48.610 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.610 04:01:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.610 spare_malloc 00:12:48.610 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.611 spare_delay 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.611 [2024-11-18 04:01:45.071243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:48.611 [2024-11-18 04:01:45.071303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.611 [2024-11-18 04:01:45.071324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:48.611 [2024-11-18 04:01:45.071337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.611 [2024-11-18 04:01:45.073747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.611 [2024-11-18 04:01:45.073790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:48.611 spare 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.611 04:01:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.611 [2024-11-18 04:01:45.083290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.611 [2024-11-18 04:01:45.085329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.611 [2024-11-18 04:01:45.085523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:48.611 [2024-11-18 04:01:45.085541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:48.611 [2024-11-18 04:01:45.085800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:48.611 [2024-11-18 04:01:45.085998] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:48.611 [2024-11-18 04:01:45.086014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:48.611 [2024-11-18 04:01:45.086191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.611 "name": "raid_bdev1", 00:12:48.611 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:48.611 "strip_size_kb": 0, 00:12:48.611 "state": "online", 00:12:48.611 "raid_level": "raid1", 00:12:48.611 "superblock": true, 00:12:48.611 "num_base_bdevs": 2, 00:12:48.611 "num_base_bdevs_discovered": 2, 00:12:48.611 "num_base_bdevs_operational": 2, 00:12:48.611 "base_bdevs_list": [ 00:12:48.611 { 00:12:48.611 "name": "BaseBdev1", 00:12:48.611 "uuid": "424d5137-db1c-58aa-bc50-2e3961e66173", 00:12:48.611 "is_configured": true, 00:12:48.611 "data_offset": 2048, 00:12:48.611 "data_size": 63488 00:12:48.611 }, 00:12:48.611 { 00:12:48.611 "name": "BaseBdev2", 00:12:48.611 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:48.611 "is_configured": true, 00:12:48.611 "data_offset": 2048, 00:12:48.611 "data_size": 63488 00:12:48.611 } 00:12:48.611 ] 00:12:48.611 }' 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.611 04:01:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.181 [2024-11-18 04:01:45.534783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.181 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:49.181 [2024-11-18 04:01:45.790236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:49.181 /dev/nbd0 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.441 1+0 records in 00:12:49.441 1+0 records out 00:12:49.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371166 s, 11.0 MB/s 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:49.441 04:01:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:53.643 63488+0 records in 00:12:53.643 63488+0 records out 00:12:53.643 32505856 bytes (33 MB, 31 MiB) copied, 3.87274 s, 8.4 MB/s 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.643 04:01:49 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.643 [2024-11-18 04:01:49.959957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.643 [2024-11-18 04:01:49.972038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.643 04:01:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.644 04:01:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.644 04:01:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.644 04:01:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.644 "name": "raid_bdev1", 00:12:53.644 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:53.644 "strip_size_kb": 0, 00:12:53.644 "state": "online", 00:12:53.644 "raid_level": "raid1", 00:12:53.644 "superblock": true, 
00:12:53.644 "num_base_bdevs": 2, 00:12:53.644 "num_base_bdevs_discovered": 1, 00:12:53.644 "num_base_bdevs_operational": 1, 00:12:53.644 "base_bdevs_list": [ 00:12:53.644 { 00:12:53.644 "name": null, 00:12:53.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.644 "is_configured": false, 00:12:53.644 "data_offset": 0, 00:12:53.644 "data_size": 63488 00:12:53.644 }, 00:12:53.644 { 00:12:53.644 "name": "BaseBdev2", 00:12:53.644 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:53.644 "is_configured": true, 00:12:53.644 "data_offset": 2048, 00:12:53.644 "data_size": 63488 00:12:53.644 } 00:12:53.644 ] 00:12:53.644 }' 00:12:53.644 04:01:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.644 04:01:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.904 04:01:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.904 04:01:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.904 04:01:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.904 [2024-11-18 04:01:50.411327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.904 [2024-11-18 04:01:50.428884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:53.904 04:01:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.904 [2024-11-18 04:01:50.430779] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.904 04:01:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:54.843 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.843 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.844 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.104 "name": "raid_bdev1", 00:12:55.104 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:55.104 "strip_size_kb": 0, 00:12:55.104 "state": "online", 00:12:55.104 "raid_level": "raid1", 00:12:55.104 "superblock": true, 00:12:55.104 "num_base_bdevs": 2, 00:12:55.104 "num_base_bdevs_discovered": 2, 00:12:55.104 "num_base_bdevs_operational": 2, 00:12:55.104 "process": { 00:12:55.104 "type": "rebuild", 00:12:55.104 "target": "spare", 00:12:55.104 "progress": { 00:12:55.104 "blocks": 20480, 00:12:55.104 "percent": 32 00:12:55.104 } 00:12:55.104 }, 00:12:55.104 "base_bdevs_list": [ 00:12:55.104 { 00:12:55.104 "name": "spare", 00:12:55.104 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:55.104 "is_configured": true, 00:12:55.104 "data_offset": 2048, 00:12:55.104 "data_size": 63488 00:12:55.104 }, 00:12:55.104 { 00:12:55.104 "name": "BaseBdev2", 00:12:55.104 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:55.104 "is_configured": true, 00:12:55.104 "data_offset": 2048, 00:12:55.104 "data_size": 63488 
00:12:55.104 } 00:12:55.104 ] 00:12:55.104 }' 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.104 [2024-11-18 04:01:51.594071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.104 [2024-11-18 04:01:51.636557] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:55.104 [2024-11-18 04:01:51.636629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.104 [2024-11-18 04:01:51.636644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:55.104 [2024-11-18 04:01:51.636653] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.104 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.104 "name": "raid_bdev1", 00:12:55.104 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:55.104 "strip_size_kb": 0, 00:12:55.104 "state": "online", 00:12:55.104 "raid_level": "raid1", 00:12:55.104 "superblock": true, 00:12:55.104 "num_base_bdevs": 2, 00:12:55.104 "num_base_bdevs_discovered": 1, 00:12:55.104 "num_base_bdevs_operational": 1, 00:12:55.104 "base_bdevs_list": [ 00:12:55.104 { 00:12:55.104 "name": null, 00:12:55.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.104 "is_configured": false, 00:12:55.104 "data_offset": 0, 00:12:55.104 "data_size": 63488 00:12:55.104 }, 00:12:55.104 { 00:12:55.104 "name": "BaseBdev2", 00:12:55.104 "uuid": 
"9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:55.105 "is_configured": true, 00:12:55.105 "data_offset": 2048, 00:12:55.105 "data_size": 63488 00:12:55.105 } 00:12:55.105 ] 00:12:55.105 }' 00:12:55.105 04:01:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.105 04:01:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.674 04:01:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.675 "name": "raid_bdev1", 00:12:55.675 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:55.675 "strip_size_kb": 0, 00:12:55.675 "state": "online", 00:12:55.675 "raid_level": "raid1", 00:12:55.675 "superblock": true, 00:12:55.675 "num_base_bdevs": 2, 00:12:55.675 "num_base_bdevs_discovered": 1, 00:12:55.675 "num_base_bdevs_operational": 1, 00:12:55.675 "base_bdevs_list": [ 00:12:55.675 { 
00:12:55.675 "name": null, 00:12:55.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.675 "is_configured": false, 00:12:55.675 "data_offset": 0, 00:12:55.675 "data_size": 63488 00:12:55.675 }, 00:12:55.675 { 00:12:55.675 "name": "BaseBdev2", 00:12:55.675 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:55.675 "is_configured": true, 00:12:55.675 "data_offset": 2048, 00:12:55.675 "data_size": 63488 00:12:55.675 } 00:12:55.675 ] 00:12:55.675 }' 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.675 [2024-11-18 04:01:52.231981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.675 [2024-11-18 04:01:52.248507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.675 04:01:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:55.675 [2024-11-18 04:01:52.250378] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.056 04:01:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.056 "name": "raid_bdev1", 00:12:57.056 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:57.056 "strip_size_kb": 0, 00:12:57.056 "state": "online", 00:12:57.056 "raid_level": "raid1", 00:12:57.056 "superblock": true, 00:12:57.056 "num_base_bdevs": 2, 00:12:57.056 "num_base_bdevs_discovered": 2, 00:12:57.056 "num_base_bdevs_operational": 2, 00:12:57.056 "process": { 00:12:57.056 "type": "rebuild", 00:12:57.056 "target": "spare", 00:12:57.056 "progress": { 00:12:57.056 "blocks": 20480, 00:12:57.056 "percent": 32 00:12:57.056 } 00:12:57.056 }, 00:12:57.056 "base_bdevs_list": [ 00:12:57.056 { 00:12:57.056 "name": "spare", 00:12:57.056 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:57.056 "is_configured": true, 00:12:57.056 "data_offset": 2048, 00:12:57.056 "data_size": 63488 00:12:57.056 }, 00:12:57.056 { 00:12:57.056 "name": "BaseBdev2", 00:12:57.056 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:57.056 
"is_configured": true, 00:12:57.056 "data_offset": 2048, 00:12:57.056 "data_size": 63488 00:12:57.056 } 00:12:57.056 ] 00:12:57.056 }' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:57.056 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=387 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.056 "name": "raid_bdev1", 00:12:57.056 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:57.056 "strip_size_kb": 0, 00:12:57.056 "state": "online", 00:12:57.056 "raid_level": "raid1", 00:12:57.056 "superblock": true, 00:12:57.056 "num_base_bdevs": 2, 00:12:57.056 "num_base_bdevs_discovered": 2, 00:12:57.056 "num_base_bdevs_operational": 2, 00:12:57.056 "process": { 00:12:57.056 "type": "rebuild", 00:12:57.056 "target": "spare", 00:12:57.056 "progress": { 00:12:57.056 "blocks": 22528, 00:12:57.056 "percent": 35 00:12:57.056 } 00:12:57.056 }, 00:12:57.056 "base_bdevs_list": [ 00:12:57.056 { 00:12:57.056 "name": "spare", 00:12:57.056 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:57.056 "is_configured": true, 00:12:57.056 "data_offset": 2048, 00:12:57.056 "data_size": 63488 00:12:57.056 }, 00:12:57.056 { 00:12:57.056 "name": "BaseBdev2", 00:12:57.056 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:57.056 "is_configured": true, 00:12:57.056 "data_offset": 2048, 00:12:57.056 "data_size": 63488 00:12:57.056 } 00:12:57.056 ] 00:12:57.056 }' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.056 04:01:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.056 04:01:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.995 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.995 "name": "raid_bdev1", 00:12:57.995 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:57.995 "strip_size_kb": 0, 00:12:57.995 "state": "online", 00:12:57.995 "raid_level": "raid1", 00:12:57.995 "superblock": true, 00:12:57.995 "num_base_bdevs": 2, 00:12:57.995 "num_base_bdevs_discovered": 2, 00:12:57.995 "num_base_bdevs_operational": 2, 00:12:57.995 "process": { 
00:12:57.995 "type": "rebuild", 00:12:57.995 "target": "spare", 00:12:57.995 "progress": { 00:12:57.995 "blocks": 45056, 00:12:57.995 "percent": 70 00:12:57.995 } 00:12:57.995 }, 00:12:57.995 "base_bdevs_list": [ 00:12:57.995 { 00:12:57.995 "name": "spare", 00:12:57.995 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:57.995 "is_configured": true, 00:12:57.995 "data_offset": 2048, 00:12:57.995 "data_size": 63488 00:12:57.996 }, 00:12:57.996 { 00:12:57.996 "name": "BaseBdev2", 00:12:57.996 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:57.996 "is_configured": true, 00:12:57.996 "data_offset": 2048, 00:12:57.996 "data_size": 63488 00:12:57.996 } 00:12:57.996 ] 00:12:57.996 }' 00:12:57.996 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.255 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.255 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.255 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.255 04:01:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.824 [2024-11-18 04:01:55.362992] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:58.824 [2024-11-18 04:01:55.363060] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:58.824 [2024-11-18 04:01:55.363145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.085 
04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.085 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.346 "name": "raid_bdev1", 00:12:59.346 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:59.346 "strip_size_kb": 0, 00:12:59.346 "state": "online", 00:12:59.346 "raid_level": "raid1", 00:12:59.346 "superblock": true, 00:12:59.346 "num_base_bdevs": 2, 00:12:59.346 "num_base_bdevs_discovered": 2, 00:12:59.346 "num_base_bdevs_operational": 2, 00:12:59.346 "base_bdevs_list": [ 00:12:59.346 { 00:12:59.346 "name": "spare", 00:12:59.346 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:59.346 "is_configured": true, 00:12:59.346 "data_offset": 2048, 00:12:59.346 "data_size": 63488 00:12:59.346 }, 00:12:59.346 { 00:12:59.346 "name": "BaseBdev2", 00:12:59.346 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:59.346 "is_configured": true, 00:12:59.346 "data_offset": 2048, 00:12:59.346 "data_size": 63488 00:12:59.346 } 00:12:59.346 ] 00:12:59.346 }' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.346 "name": "raid_bdev1", 00:12:59.346 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:59.346 "strip_size_kb": 0, 00:12:59.346 "state": "online", 00:12:59.346 "raid_level": "raid1", 00:12:59.346 "superblock": true, 00:12:59.346 "num_base_bdevs": 2, 00:12:59.346 "num_base_bdevs_discovered": 2, 00:12:59.346 "num_base_bdevs_operational": 2, 00:12:59.346 "base_bdevs_list": [ 00:12:59.346 { 00:12:59.346 
"name": "spare", 00:12:59.346 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:59.346 "is_configured": true, 00:12:59.346 "data_offset": 2048, 00:12:59.346 "data_size": 63488 00:12:59.346 }, 00:12:59.346 { 00:12:59.346 "name": "BaseBdev2", 00:12:59.346 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:59.346 "is_configured": true, 00:12:59.346 "data_offset": 2048, 00:12:59.346 "data_size": 63488 00:12:59.346 } 00:12:59.346 ] 00:12:59.346 }' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.346 04:01:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.605 04:01:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.605 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.605 "name": "raid_bdev1", 00:12:59.605 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:12:59.605 "strip_size_kb": 0, 00:12:59.605 "state": "online", 00:12:59.605 "raid_level": "raid1", 00:12:59.605 "superblock": true, 00:12:59.605 "num_base_bdevs": 2, 00:12:59.605 "num_base_bdevs_discovered": 2, 00:12:59.605 "num_base_bdevs_operational": 2, 00:12:59.605 "base_bdevs_list": [ 00:12:59.605 { 00:12:59.605 "name": "spare", 00:12:59.605 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:12:59.605 "is_configured": true, 00:12:59.605 "data_offset": 2048, 00:12:59.605 "data_size": 63488 00:12:59.605 }, 00:12:59.605 { 00:12:59.605 "name": "BaseBdev2", 00:12:59.605 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:12:59.605 "is_configured": true, 00:12:59.605 "data_offset": 2048, 00:12:59.605 "data_size": 63488 00:12:59.605 } 00:12:59.605 ] 00:12:59.605 }' 00:12:59.605 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.605 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.865 [2024-11-18 04:01:56.435400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.865 [2024-11-18 04:01:56.435436] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.865 [2024-11-18 04:01:56.435516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.865 [2024-11-18 04:01:56.435582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.865 [2024-11-18 04:01:56.435591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:59.865 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:00.126 /dev/nbd0 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.126 1+0 records in 00:13:00.126 1+0 records out 00:13:00.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371448 s, 11.0 MB/s 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.126 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:00.387 /dev/nbd1 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:00.387 04:01:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.387 1+0 records in 00:13:00.387 1+0 records out 00:13:00.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414836 s, 9.9 MB/s 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.387 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:00.388 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.388 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:00.388 04:01:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:00.388 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.388 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:00.388 04:01:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:00.647 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:00.647 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.647 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:00.647 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.647 
04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:00.647 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.647 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.906 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.164 [2024-11-18 04:01:57.629046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.164 [2024-11-18 04:01:57.629098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.164 [2024-11-18 04:01:57.629121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.164 [2024-11-18 04:01:57.629130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.164 [2024-11-18 04:01:57.631355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.164 [2024-11-18 04:01:57.631388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.164 [2024-11-18 04:01:57.631479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:01.164 [2024-11-18 
04:01:57.631524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.164 [2024-11-18 04:01:57.631692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.164 spare 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.164 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.164 [2024-11-18 04:01:57.731589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:01.164 [2024-11-18 04:01:57.731646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.165 [2024-11-18 04:01:57.731925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:01.165 [2024-11-18 04:01:57.732113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:01.165 [2024-11-18 04:01:57.732128] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:01.165 [2024-11-18 04:01:57.732297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.165 "name": "raid_bdev1", 00:13:01.165 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:01.165 "strip_size_kb": 0, 00:13:01.165 "state": "online", 00:13:01.165 "raid_level": "raid1", 00:13:01.165 "superblock": true, 00:13:01.165 "num_base_bdevs": 2, 00:13:01.165 "num_base_bdevs_discovered": 2, 00:13:01.165 "num_base_bdevs_operational": 2, 00:13:01.165 "base_bdevs_list": [ 00:13:01.165 { 00:13:01.165 "name": "spare", 00:13:01.165 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:13:01.165 "is_configured": true, 00:13:01.165 "data_offset": 2048, 00:13:01.165 "data_size": 63488 00:13:01.165 }, 00:13:01.165 { 00:13:01.165 "name": "BaseBdev2", 00:13:01.165 "uuid": 
"9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:01.165 "is_configured": true, 00:13:01.165 "data_offset": 2048, 00:13:01.165 "data_size": 63488 00:13:01.165 } 00:13:01.165 ] 00:13:01.165 }' 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.165 04:01:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.732 "name": "raid_bdev1", 00:13:01.732 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:01.732 "strip_size_kb": 0, 00:13:01.732 "state": "online", 00:13:01.732 "raid_level": "raid1", 00:13:01.732 "superblock": true, 00:13:01.732 "num_base_bdevs": 2, 00:13:01.732 "num_base_bdevs_discovered": 2, 00:13:01.732 "num_base_bdevs_operational": 2, 00:13:01.732 "base_bdevs_list": [ 00:13:01.732 { 
00:13:01.732 "name": "spare", 00:13:01.732 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:13:01.732 "is_configured": true, 00:13:01.732 "data_offset": 2048, 00:13:01.732 "data_size": 63488 00:13:01.732 }, 00:13:01.732 { 00:13:01.732 "name": "BaseBdev2", 00:13:01.732 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:01.732 "is_configured": true, 00:13:01.732 "data_offset": 2048, 00:13:01.732 "data_size": 63488 00:13:01.732 } 00:13:01.732 ] 00:13:01.732 }' 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.732 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.733 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.733 [2024-11-18 04:01:58.371856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.991 "name": "raid_bdev1", 00:13:01.991 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:01.991 "strip_size_kb": 0, 00:13:01.991 
"state": "online", 00:13:01.991 "raid_level": "raid1", 00:13:01.991 "superblock": true, 00:13:01.991 "num_base_bdevs": 2, 00:13:01.991 "num_base_bdevs_discovered": 1, 00:13:01.991 "num_base_bdevs_operational": 1, 00:13:01.991 "base_bdevs_list": [ 00:13:01.991 { 00:13:01.991 "name": null, 00:13:01.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.991 "is_configured": false, 00:13:01.991 "data_offset": 0, 00:13:01.991 "data_size": 63488 00:13:01.991 }, 00:13:01.991 { 00:13:01.991 "name": "BaseBdev2", 00:13:01.991 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:01.991 "is_configured": true, 00:13:01.991 "data_offset": 2048, 00:13:01.991 "data_size": 63488 00:13:01.991 } 00:13:01.991 ] 00:13:01.991 }' 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.991 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.250 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.250 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.250 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.250 [2024-11-18 04:01:58.827202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.250 [2024-11-18 04:01:58.827392] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:02.250 [2024-11-18 04:01:58.827409] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:02.250 [2024-11-18 04:01:58.827443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.250 [2024-11-18 04:01:58.843537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:02.250 04:01:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.250 04:01:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:02.250 [2024-11-18 04:01:58.845359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.626 "name": "raid_bdev1", 00:13:03.626 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:03.626 "strip_size_kb": 0, 00:13:03.626 "state": "online", 00:13:03.626 "raid_level": "raid1", 
00:13:03.626 "superblock": true, 00:13:03.626 "num_base_bdevs": 2, 00:13:03.626 "num_base_bdevs_discovered": 2, 00:13:03.626 "num_base_bdevs_operational": 2, 00:13:03.626 "process": { 00:13:03.626 "type": "rebuild", 00:13:03.626 "target": "spare", 00:13:03.626 "progress": { 00:13:03.626 "blocks": 20480, 00:13:03.626 "percent": 32 00:13:03.626 } 00:13:03.626 }, 00:13:03.626 "base_bdevs_list": [ 00:13:03.626 { 00:13:03.626 "name": "spare", 00:13:03.626 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:13:03.626 "is_configured": true, 00:13:03.626 "data_offset": 2048, 00:13:03.626 "data_size": 63488 00:13:03.626 }, 00:13:03.626 { 00:13:03.626 "name": "BaseBdev2", 00:13:03.626 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:03.626 "is_configured": true, 00:13:03.626 "data_offset": 2048, 00:13:03.626 "data_size": 63488 00:13:03.626 } 00:13:03.626 ] 00:13:03.626 }' 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.626 04:01:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.626 [2024-11-18 04:01:59.985338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.626 [2024-11-18 04:02:00.050250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:03.626 [2024-11-18 04:02:00.050301] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:03.626 [2024-11-18 04:02:00.050314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.626 [2024-11-18 04:02:00.050323] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:03.626 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.626 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.627 "name": "raid_bdev1", 00:13:03.627 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:03.627 "strip_size_kb": 0, 00:13:03.627 "state": "online", 00:13:03.627 "raid_level": "raid1", 00:13:03.627 "superblock": true, 00:13:03.627 "num_base_bdevs": 2, 00:13:03.627 "num_base_bdevs_discovered": 1, 00:13:03.627 "num_base_bdevs_operational": 1, 00:13:03.627 "base_bdevs_list": [ 00:13:03.627 { 00:13:03.627 "name": null, 00:13:03.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.627 "is_configured": false, 00:13:03.627 "data_offset": 0, 00:13:03.627 "data_size": 63488 00:13:03.627 }, 00:13:03.627 { 00:13:03.627 "name": "BaseBdev2", 00:13:03.627 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:03.627 "is_configured": true, 00:13:03.627 "data_offset": 2048, 00:13:03.627 "data_size": 63488 00:13:03.627 } 00:13:03.627 ] 00:13:03.627 }' 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.627 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.195 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.195 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.195 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.195 [2024-11-18 04:02:00.535364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.195 [2024-11-18 04:02:00.535432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.195 [2024-11-18 04:02:00.535456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:04.195 [2024-11-18 04:02:00.535467] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.195 [2024-11-18 04:02:00.535964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.195 [2024-11-18 04:02:00.535987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.195 [2024-11-18 04:02:00.536080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:04.195 [2024-11-18 04:02:00.536097] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:04.195 [2024-11-18 04:02:00.536107] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:04.195 [2024-11-18 04:02:00.536127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.195 [2024-11-18 04:02:00.551838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:04.195 spare 00:13:04.195 04:02:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.195 04:02:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:04.195 [2024-11-18 04:02:00.553661] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.132 "name": "raid_bdev1", 00:13:05.132 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:05.132 "strip_size_kb": 0, 00:13:05.132 "state": "online", 00:13:05.132 "raid_level": "raid1", 00:13:05.132 "superblock": true, 00:13:05.132 "num_base_bdevs": 2, 00:13:05.132 "num_base_bdevs_discovered": 2, 00:13:05.132 "num_base_bdevs_operational": 2, 00:13:05.132 "process": { 00:13:05.132 "type": "rebuild", 00:13:05.132 "target": "spare", 00:13:05.132 "progress": { 00:13:05.132 "blocks": 20480, 00:13:05.132 "percent": 32 00:13:05.132 } 00:13:05.132 }, 00:13:05.132 "base_bdevs_list": [ 00:13:05.132 { 00:13:05.132 "name": "spare", 00:13:05.132 "uuid": "41d553ac-e705-5d34-9aa9-c13980c31f49", 00:13:05.132 "is_configured": true, 00:13:05.132 "data_offset": 2048, 00:13:05.132 "data_size": 63488 00:13:05.132 }, 00:13:05.132 { 00:13:05.132 "name": "BaseBdev2", 00:13:05.132 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:05.132 "is_configured": true, 00:13:05.132 "data_offset": 2048, 00:13:05.132 "data_size": 63488 00:13:05.132 } 00:13:05.132 ] 00:13:05.132 }' 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.132 
04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.132 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.132 [2024-11-18 04:02:01.729496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.132 [2024-11-18 04:02:01.758494] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:05.132 [2024-11-18 04:02:01.758543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.132 [2024-11-18 04:02:01.758558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.132 [2024-11-18 04:02:01.758564] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.390 "name": "raid_bdev1", 00:13:05.390 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:05.390 "strip_size_kb": 0, 00:13:05.390 "state": "online", 00:13:05.390 "raid_level": "raid1", 00:13:05.390 "superblock": true, 00:13:05.390 "num_base_bdevs": 2, 00:13:05.390 "num_base_bdevs_discovered": 1, 00:13:05.390 "num_base_bdevs_operational": 1, 00:13:05.390 "base_bdevs_list": [ 00:13:05.390 { 00:13:05.390 "name": null, 00:13:05.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.390 "is_configured": false, 00:13:05.390 "data_offset": 0, 00:13:05.390 "data_size": 63488 00:13:05.390 }, 00:13:05.390 { 00:13:05.390 "name": "BaseBdev2", 00:13:05.390 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:05.390 "is_configured": true, 00:13:05.390 "data_offset": 2048, 00:13:05.390 "data_size": 63488 00:13:05.390 } 00:13:05.390 ] 00:13:05.390 }' 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.390 04:02:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.651 04:02:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.651 "name": "raid_bdev1", 00:13:05.651 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:05.651 "strip_size_kb": 0, 00:13:05.651 "state": "online", 00:13:05.651 "raid_level": "raid1", 00:13:05.651 "superblock": true, 00:13:05.651 "num_base_bdevs": 2, 00:13:05.651 "num_base_bdevs_discovered": 1, 00:13:05.651 "num_base_bdevs_operational": 1, 00:13:05.651 "base_bdevs_list": [ 00:13:05.651 { 00:13:05.651 "name": null, 00:13:05.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.651 "is_configured": false, 00:13:05.651 "data_offset": 0, 00:13:05.651 "data_size": 63488 00:13:05.651 }, 00:13:05.651 { 00:13:05.651 "name": "BaseBdev2", 00:13:05.651 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:05.651 "is_configured": true, 00:13:05.651 "data_offset": 2048, 00:13:05.651 "data_size": 
63488 00:13:05.651 } 00:13:05.651 ] 00:13:05.651 }' 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.651 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.911 [2024-11-18 04:02:02.351100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:05.911 [2024-11-18 04:02:02.351152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.911 [2024-11-18 04:02:02.351175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:05.911 [2024-11-18 04:02:02.351195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.911 [2024-11-18 04:02:02.351657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.911 [2024-11-18 04:02:02.351684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:05.911 [2024-11-18 04:02:02.351765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:05.911 [2024-11-18 04:02:02.351783] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:05.911 [2024-11-18 04:02:02.351795] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:05.911 [2024-11-18 04:02:02.351804] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:05.911 BaseBdev1 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.911 04:02:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.851 "name": "raid_bdev1", 00:13:06.851 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:06.851 "strip_size_kb": 0, 00:13:06.851 "state": "online", 00:13:06.851 "raid_level": "raid1", 00:13:06.851 "superblock": true, 00:13:06.851 "num_base_bdevs": 2, 00:13:06.851 "num_base_bdevs_discovered": 1, 00:13:06.851 "num_base_bdevs_operational": 1, 00:13:06.851 "base_bdevs_list": [ 00:13:06.851 { 00:13:06.851 "name": null, 00:13:06.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.851 "is_configured": false, 00:13:06.851 "data_offset": 0, 00:13:06.851 "data_size": 63488 00:13:06.851 }, 00:13:06.851 { 00:13:06.851 "name": "BaseBdev2", 00:13:06.851 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:06.851 "is_configured": true, 00:13:06.851 "data_offset": 2048, 00:13:06.851 "data_size": 63488 00:13:06.851 } 00:13:06.851 ] 00:13:06.851 }' 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.851 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.420 "name": "raid_bdev1", 00:13:07.420 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:07.420 "strip_size_kb": 0, 00:13:07.420 "state": "online", 00:13:07.420 "raid_level": "raid1", 00:13:07.420 "superblock": true, 00:13:07.420 "num_base_bdevs": 2, 00:13:07.420 "num_base_bdevs_discovered": 1, 00:13:07.420 "num_base_bdevs_operational": 1, 00:13:07.420 "base_bdevs_list": [ 00:13:07.420 { 00:13:07.420 "name": null, 00:13:07.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.420 "is_configured": false, 00:13:07.420 "data_offset": 0, 00:13:07.420 "data_size": 63488 00:13:07.420 }, 00:13:07.420 { 00:13:07.420 "name": "BaseBdev2", 00:13:07.420 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:07.420 "is_configured": true, 00:13:07.420 "data_offset": 2048, 00:13:07.420 "data_size": 63488 00:13:07.420 } 00:13:07.420 ] 00:13:07.420 }' 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.420 04:02:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.420 [2024-11-18 04:02:03.904583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.420 [2024-11-18 04:02:03.904746] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:07.420 [2024-11-18 04:02:03.904760] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:07.420 request: 00:13:07.420 { 00:13:07.420 "base_bdev": "BaseBdev1", 00:13:07.420 "raid_bdev": "raid_bdev1", 00:13:07.420 "method": 
"bdev_raid_add_base_bdev", 00:13:07.420 "req_id": 1 00:13:07.420 } 00:13:07.420 Got JSON-RPC error response 00:13:07.420 response: 00:13:07.420 { 00:13:07.420 "code": -22, 00:13:07.420 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:07.420 } 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:07.420 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:07.421 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:07.421 04:02:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:07.421 04:02:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.376 04:02:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.376 "name": "raid_bdev1", 00:13:08.376 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:08.376 "strip_size_kb": 0, 00:13:08.376 "state": "online", 00:13:08.376 "raid_level": "raid1", 00:13:08.376 "superblock": true, 00:13:08.376 "num_base_bdevs": 2, 00:13:08.376 "num_base_bdevs_discovered": 1, 00:13:08.376 "num_base_bdevs_operational": 1, 00:13:08.376 "base_bdevs_list": [ 00:13:08.376 { 00:13:08.376 "name": null, 00:13:08.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.376 "is_configured": false, 00:13:08.376 "data_offset": 0, 00:13:08.376 "data_size": 63488 00:13:08.376 }, 00:13:08.376 { 00:13:08.376 "name": "BaseBdev2", 00:13:08.376 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:08.376 "is_configured": true, 00:13:08.376 "data_offset": 2048, 00:13:08.376 "data_size": 63488 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 }' 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.376 04:02:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.947 "name": "raid_bdev1", 00:13:08.947 "uuid": "5aa180d6-32c2-41cb-aacc-feb5b43ab1db", 00:13:08.947 "strip_size_kb": 0, 00:13:08.947 "state": "online", 00:13:08.947 "raid_level": "raid1", 00:13:08.947 "superblock": true, 00:13:08.947 "num_base_bdevs": 2, 00:13:08.947 "num_base_bdevs_discovered": 1, 00:13:08.947 "num_base_bdevs_operational": 1, 00:13:08.947 "base_bdevs_list": [ 00:13:08.947 { 00:13:08.947 "name": null, 00:13:08.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.947 "is_configured": false, 00:13:08.947 "data_offset": 0, 00:13:08.947 "data_size": 63488 00:13:08.947 }, 00:13:08.947 { 00:13:08.947 "name": "BaseBdev2", 00:13:08.947 "uuid": "9478d098-a9a7-5b5c-86cd-785de905056d", 00:13:08.947 "is_configured": true, 00:13:08.947 "data_offset": 2048, 00:13:08.947 "data_size": 63488 00:13:08.947 } 00:13:08.947 ] 00:13:08.947 }' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75716 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75716 ']' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75716 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75716 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.947 killing process with pid 75716 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75716' 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75716 00:13:08.947 Received shutdown signal, test time was about 60.000000 seconds 00:13:08.947 00:13:08.947 Latency(us) 00:13:08.947 [2024-11-18T04:02:05.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.947 [2024-11-18T04:02:05.588Z] =================================================================================================================== 00:13:08.947 [2024-11-18T04:02:05.588Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.947 [2024-11-18 04:02:05.528485] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.947 [2024-11-18 
04:02:05.528607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.947 04:02:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75716 00:13:08.947 [2024-11-18 04:02:05.528662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.947 [2024-11-18 04:02:05.528674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:09.207 [2024-11-18 04:02:05.813310] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:10.589 00:13:10.589 real 0m22.995s 00:13:10.589 user 0m28.214s 00:13:10.589 sys 0m3.551s 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.589 ************************************ 00:13:10.589 END TEST raid_rebuild_test_sb 00:13:10.589 ************************************ 00:13:10.589 04:02:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:10.589 04:02:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:10.589 04:02:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.589 04:02:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.589 ************************************ 00:13:10.589 START TEST raid_rebuild_test_io 00:13:10.589 ************************************ 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:10.589 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:10.590 
04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76435 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76435 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76435 ']' 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.590 04:02:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.590 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.590 Zero copy mechanism will not be used. 00:13:10.590 [2024-11-18 04:02:07.046809] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:10.590 [2024-11-18 04:02:07.046927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76435 ] 00:13:10.590 [2024-11-18 04:02:07.218380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.849 [2024-11-18 04:02:07.330370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.109 [2024-11-18 04:02:07.524220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.109 [2024-11-18 04:02:07.524284] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.368 BaseBdev1_malloc 00:13:11.368 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 [2024-11-18 04:02:07.911618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:11.369 [2024-11-18 04:02:07.911681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.369 [2024-11-18 04:02:07.911705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:11.369 [2024-11-18 04:02:07.911716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.369 [2024-11-18 04:02:07.913803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.369 [2024-11-18 04:02:07.913855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.369 BaseBdev1 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 BaseBdev2_malloc 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.369 [2024-11-18 04:02:07.968186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:11.369 [2024-11-18 04:02:07.968244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.369 [2024-11-18 04:02:07.968262] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:11.369 [2024-11-18 04:02:07.968272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.369 [2024-11-18 04:02:07.970280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.369 [2024-11-18 04:02:07.970315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.369 BaseBdev2 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.369 04:02:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.628 spare_malloc 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.628 spare_delay 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.628 [2024-11-18 04:02:08.048747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:11.628 [2024-11-18 04:02:08.048801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.628 [2024-11-18 04:02:08.048820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:11.628 [2024-11-18 04:02:08.048843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.628 [2024-11-18 04:02:08.050987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.628 [2024-11-18 04:02:08.051020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:11.628 spare 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.628 [2024-11-18 04:02:08.060787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.628 [2024-11-18 04:02:08.062549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.628 [2024-11-18 04:02:08.062635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:11.628 [2024-11-18 04:02:08.062649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:11.628 [2024-11-18 04:02:08.062895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:11.628 [2024-11-18 04:02:08.063050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:11.628 [2024-11-18 04:02:08.063065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:11.628 [2024-11-18 04:02:08.063218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.628 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.629 
"name": "raid_bdev1", 00:13:11.629 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:11.629 "strip_size_kb": 0, 00:13:11.629 "state": "online", 00:13:11.629 "raid_level": "raid1", 00:13:11.629 "superblock": false, 00:13:11.629 "num_base_bdevs": 2, 00:13:11.629 "num_base_bdevs_discovered": 2, 00:13:11.629 "num_base_bdevs_operational": 2, 00:13:11.629 "base_bdevs_list": [ 00:13:11.629 { 00:13:11.629 "name": "BaseBdev1", 00:13:11.629 "uuid": "3065d434-7a24-5a6c-9f16-1d5072263354", 00:13:11.629 "is_configured": true, 00:13:11.629 "data_offset": 0, 00:13:11.629 "data_size": 65536 00:13:11.629 }, 00:13:11.629 { 00:13:11.629 "name": "BaseBdev2", 00:13:11.629 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:11.629 "is_configured": true, 00:13:11.629 "data_offset": 0, 00:13:11.629 "data_size": 65536 00:13:11.629 } 00:13:11.629 ] 00:13:11.629 }' 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.629 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.887 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.888 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:11.888 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.888 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.888 [2024-11-18 04:02:08.508286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.147 [2024-11-18 04:02:08.579891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.147 04:02:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.147 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.147 "name": "raid_bdev1", 00:13:12.147 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:12.147 "strip_size_kb": 0, 00:13:12.147 "state": "online", 00:13:12.147 "raid_level": "raid1", 00:13:12.147 "superblock": false, 00:13:12.147 "num_base_bdevs": 2, 00:13:12.147 "num_base_bdevs_discovered": 1, 00:13:12.147 "num_base_bdevs_operational": 1, 00:13:12.147 "base_bdevs_list": [ 00:13:12.147 { 00:13:12.147 "name": null, 00:13:12.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.147 "is_configured": false, 00:13:12.147 "data_offset": 0, 00:13:12.147 "data_size": 65536 00:13:12.147 }, 00:13:12.147 { 00:13:12.147 "name": "BaseBdev2", 00:13:12.147 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:12.147 "is_configured": true, 00:13:12.147 "data_offset": 0, 00:13:12.147 "data_size": 65536 00:13:12.148 } 00:13:12.148 ] 00:13:12.148 }' 00:13:12.148 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:12.148 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.148 [2024-11-18 04:02:08.676026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:12.148 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:12.148 Zero copy mechanism will not be used. 00:13:12.148 Running I/O for 60 seconds... 00:13:12.407 04:02:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.407 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.407 04:02:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.407 [2024-11-18 04:02:09.008293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.407 04:02:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.407 04:02:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:12.667 [2024-11-18 04:02:09.059122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:12.667 [2024-11-18 04:02:09.060990] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.667 [2024-11-18 04:02:09.162304] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.667 [2024-11-18 04:02:09.162808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.667 [2024-11-18 04:02:09.272292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.667 [2024-11-18 04:02:09.272509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:13.236 [2024-11-18 04:02:09.601643] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:13.236 149.00 IOPS, 447.00 MiB/s [2024-11-18T04:02:09.877Z] [2024-11-18 04:02:09.746130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.496 [2024-11-18 04:02:10.080805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.496 "name": "raid_bdev1", 00:13:13.496 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:13.496 "strip_size_kb": 0, 00:13:13.496 "state": "online", 00:13:13.496 "raid_level": "raid1", 00:13:13.496 "superblock": false, 00:13:13.496 "num_base_bdevs": 2, 00:13:13.496 
"num_base_bdevs_discovered": 2, 00:13:13.496 "num_base_bdevs_operational": 2, 00:13:13.496 "process": { 00:13:13.496 "type": "rebuild", 00:13:13.496 "target": "spare", 00:13:13.496 "progress": { 00:13:13.496 "blocks": 12288, 00:13:13.496 "percent": 18 00:13:13.496 } 00:13:13.496 }, 00:13:13.496 "base_bdevs_list": [ 00:13:13.496 { 00:13:13.496 "name": "spare", 00:13:13.496 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:13.496 "is_configured": true, 00:13:13.496 "data_offset": 0, 00:13:13.496 "data_size": 65536 00:13:13.496 }, 00:13:13.496 { 00:13:13.496 "name": "BaseBdev2", 00:13:13.496 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:13.496 "is_configured": true, 00:13:13.496 "data_offset": 0, 00:13:13.496 "data_size": 65536 00:13:13.496 } 00:13:13.496 ] 00:13:13.496 }' 00:13:13.496 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.757 [2024-11-18 04:02:10.187968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.757 [2024-11-18 04:02:10.194235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:13.757 [2024-11-18 04:02:10.207798] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:13:13.757 [2024-11-18 04:02:10.210192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.757 [2024-11-18 04:02:10.210226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.757 [2024-11-18 04:02:10.210239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.757 [2024-11-18 04:02:10.257785] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.757 04:02:10 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.757 "name": "raid_bdev1", 00:13:13.757 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:13.757 "strip_size_kb": 0, 00:13:13.757 "state": "online", 00:13:13.757 "raid_level": "raid1", 00:13:13.757 "superblock": false, 00:13:13.757 "num_base_bdevs": 2, 00:13:13.757 "num_base_bdevs_discovered": 1, 00:13:13.757 "num_base_bdevs_operational": 1, 00:13:13.757 "base_bdevs_list": [ 00:13:13.757 { 00:13:13.757 "name": null, 00:13:13.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.757 "is_configured": false, 00:13:13.757 "data_offset": 0, 00:13:13.757 "data_size": 65536 00:13:13.757 }, 00:13:13.757 { 00:13:13.757 "name": "BaseBdev2", 00:13:13.757 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:13.757 "is_configured": true, 00:13:13.757 "data_offset": 0, 00:13:13.757 "data_size": 65536 00:13:13.757 } 00:13:13.757 ] 00:13:13.757 }' 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.757 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 162.50 IOPS, 487.50 MiB/s [2024-11-18T04:02:10.968Z] 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.327 04:02:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.327 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.327 "name": "raid_bdev1", 00:13:14.327 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:14.327 "strip_size_kb": 0, 00:13:14.327 "state": "online", 00:13:14.327 "raid_level": "raid1", 00:13:14.327 "superblock": false, 00:13:14.327 "num_base_bdevs": 2, 00:13:14.327 "num_base_bdevs_discovered": 1, 00:13:14.328 "num_base_bdevs_operational": 1, 00:13:14.328 "base_bdevs_list": [ 00:13:14.328 { 00:13:14.328 "name": null, 00:13:14.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.328 "is_configured": false, 00:13:14.328 "data_offset": 0, 00:13:14.328 "data_size": 65536 00:13:14.328 }, 00:13:14.328 { 00:13:14.328 "name": "BaseBdev2", 00:13:14.328 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:14.328 "is_configured": true, 00:13:14.328 "data_offset": 0, 00:13:14.328 "data_size": 65536 00:13:14.328 } 00:13:14.328 ] 00:13:14.328 }' 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 [2024-11-18 04:02:10.892999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.328 04:02:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:14.328 [2024-11-18 04:02:10.931103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:14.328 [2024-11-18 04:02:10.932956] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.587 [2024-11-18 04:02:11.051879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:14.587 [2024-11-18 04:02:11.052332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:14.847 [2024-11-18 04:02:11.253502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:14.847 [2024-11-18 04:02:11.253846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:15.107 [2024-11-18 04:02:11.493396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:15.107 163.00 IOPS, 489.00 MiB/s [2024-11-18T04:02:11.748Z] [2024-11-18 04:02:11.708531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:15.367 04:02:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.367 [2024-11-18 04:02:11.926674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.367 "name": "raid_bdev1", 00:13:15.367 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:15.367 "strip_size_kb": 0, 00:13:15.367 "state": "online", 00:13:15.367 "raid_level": "raid1", 00:13:15.367 "superblock": false, 00:13:15.367 "num_base_bdevs": 2, 00:13:15.367 "num_base_bdevs_discovered": 2, 00:13:15.367 "num_base_bdevs_operational": 2, 00:13:15.367 "process": { 00:13:15.367 "type": "rebuild", 00:13:15.367 "target": "spare", 00:13:15.367 "progress": { 00:13:15.367 "blocks": 14336, 00:13:15.367 "percent": 21 00:13:15.367 } 00:13:15.367 }, 00:13:15.367 "base_bdevs_list": [ 00:13:15.367 { 00:13:15.367 "name": "spare", 
00:13:15.367 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:15.367 "is_configured": true, 00:13:15.367 "data_offset": 0, 00:13:15.367 "data_size": 65536 00:13:15.367 }, 00:13:15.367 { 00:13:15.367 "name": "BaseBdev2", 00:13:15.367 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:15.367 "is_configured": true, 00:13:15.367 "data_offset": 0, 00:13:15.367 "data_size": 65536 00:13:15.367 } 00:13:15.367 ] 00:13:15.367 }' 00:13:15.367 04:02:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.627 
04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.627 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.627 "name": "raid_bdev1", 00:13:15.627 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:15.627 "strip_size_kb": 0, 00:13:15.627 "state": "online", 00:13:15.627 "raid_level": "raid1", 00:13:15.627 "superblock": false, 00:13:15.628 "num_base_bdevs": 2, 00:13:15.628 "num_base_bdevs_discovered": 2, 00:13:15.628 "num_base_bdevs_operational": 2, 00:13:15.628 "process": { 00:13:15.628 "type": "rebuild", 00:13:15.628 "target": "spare", 00:13:15.628 "progress": { 00:13:15.628 "blocks": 14336, 00:13:15.628 "percent": 21 00:13:15.628 } 00:13:15.628 }, 00:13:15.628 "base_bdevs_list": [ 00:13:15.628 { 00:13:15.628 "name": "spare", 00:13:15.628 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:15.628 "is_configured": true, 00:13:15.628 "data_offset": 0, 00:13:15.628 "data_size": 65536 00:13:15.628 }, 00:13:15.628 { 00:13:15.628 "name": "BaseBdev2", 00:13:15.628 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:15.628 "is_configured": true, 00:13:15.628 "data_offset": 0, 00:13:15.628 "data_size": 65536 00:13:15.628 } 00:13:15.628 ] 00:13:15.628 }' 00:13:15.628 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.628 [2024-11-18 04:02:12.140035] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:15.628 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.628 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.628 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.628 04:02:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.888 [2024-11-18 04:02:12.484357] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:16.414 138.75 IOPS, 416.25 MiB/s [2024-11-18T04:02:13.055Z] [2024-11-18 04:02:12.855139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:16.685 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.685 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.685 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.685 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.685 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.685 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.686 "name": "raid_bdev1", 00:13:16.686 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:16.686 "strip_size_kb": 0, 00:13:16.686 "state": "online", 00:13:16.686 "raid_level": "raid1", 00:13:16.686 "superblock": false, 00:13:16.686 "num_base_bdevs": 2, 00:13:16.686 "num_base_bdevs_discovered": 2, 00:13:16.686 "num_base_bdevs_operational": 2, 00:13:16.686 "process": { 00:13:16.686 "type": "rebuild", 00:13:16.686 "target": "spare", 00:13:16.686 "progress": { 00:13:16.686 "blocks": 32768, 00:13:16.686 "percent": 50 00:13:16.686 } 00:13:16.686 }, 00:13:16.686 "base_bdevs_list": [ 00:13:16.686 { 00:13:16.686 "name": "spare", 00:13:16.686 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:16.686 "is_configured": true, 00:13:16.686 "data_offset": 0, 00:13:16.686 "data_size": 65536 00:13:16.686 }, 00:13:16.686 { 00:13:16.686 "name": "BaseBdev2", 00:13:16.686 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:16.686 "is_configured": true, 00:13:16.686 "data_offset": 0, 00:13:16.686 "data_size": 65536 00:13:16.686 } 00:13:16.686 ] 00:13:16.686 }' 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.686 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.946 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.946 04:02:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.946 [2024-11-18 04:02:13.502079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 
offset_begin: 36864 offset_end: 43008 00:13:17.206 121.80 IOPS, 365.40 MiB/s [2024-11-18T04:02:13.847Z] [2024-11-18 04:02:13.823633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:17.776 [2024-11-18 04:02:14.166184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:17.776 [2024-11-18 04:02:14.279804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.776 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.776 "name": "raid_bdev1", 00:13:17.776 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 
00:13:17.776 "strip_size_kb": 0, 00:13:17.776 "state": "online", 00:13:17.776 "raid_level": "raid1", 00:13:17.777 "superblock": false, 00:13:17.777 "num_base_bdevs": 2, 00:13:17.777 "num_base_bdevs_discovered": 2, 00:13:17.777 "num_base_bdevs_operational": 2, 00:13:17.777 "process": { 00:13:17.777 "type": "rebuild", 00:13:17.777 "target": "spare", 00:13:17.777 "progress": { 00:13:17.777 "blocks": 53248, 00:13:17.777 "percent": 81 00:13:17.777 } 00:13:17.777 }, 00:13:17.777 "base_bdevs_list": [ 00:13:17.777 { 00:13:17.777 "name": "spare", 00:13:17.777 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:17.777 "is_configured": true, 00:13:17.777 "data_offset": 0, 00:13:17.777 "data_size": 65536 00:13:17.777 }, 00:13:17.777 { 00:13:17.777 "name": "BaseBdev2", 00:13:17.777 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:17.777 "is_configured": true, 00:13:17.777 "data_offset": 0, 00:13:17.777 "data_size": 65536 00:13:17.777 } 00:13:17.777 ] 00:13:17.777 }' 00:13:17.777 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.037 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.037 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.037 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.037 04:02:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:18.606 108.33 IOPS, 325.00 MiB/s [2024-11-18T04:02:15.247Z] [2024-11-18 04:02:15.029108] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:18.606 [2024-11-18 04:02:15.134004] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:18.606 [2024-11-18 04:02:15.135761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.177 04:02:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.177 "name": "raid_bdev1", 00:13:19.177 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:19.177 "strip_size_kb": 0, 00:13:19.177 "state": "online", 00:13:19.177 "raid_level": "raid1", 00:13:19.177 "superblock": false, 00:13:19.177 "num_base_bdevs": 2, 00:13:19.177 "num_base_bdevs_discovered": 2, 00:13:19.177 "num_base_bdevs_operational": 2, 00:13:19.177 "base_bdevs_list": [ 00:13:19.177 { 00:13:19.177 "name": "spare", 00:13:19.177 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:19.177 "is_configured": true, 00:13:19.177 "data_offset": 0, 00:13:19.177 "data_size": 65536 00:13:19.177 }, 00:13:19.177 { 00:13:19.177 "name": "BaseBdev2", 00:13:19.177 "uuid": 
"2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:19.177 "is_configured": true, 00:13:19.177 "data_offset": 0, 00:13:19.177 "data_size": 65536 00:13:19.177 } 00:13:19.177 ] 00:13:19.177 }' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.177 97.86 IOPS, 293.57 MiB/s [2024-11-18T04:02:15.818Z] 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.177 
"name": "raid_bdev1", 00:13:19.177 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:19.177 "strip_size_kb": 0, 00:13:19.177 "state": "online", 00:13:19.177 "raid_level": "raid1", 00:13:19.177 "superblock": false, 00:13:19.177 "num_base_bdevs": 2, 00:13:19.177 "num_base_bdevs_discovered": 2, 00:13:19.177 "num_base_bdevs_operational": 2, 00:13:19.177 "base_bdevs_list": [ 00:13:19.177 { 00:13:19.177 "name": "spare", 00:13:19.177 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:19.177 "is_configured": true, 00:13:19.177 "data_offset": 0, 00:13:19.177 "data_size": 65536 00:13:19.177 }, 00:13:19.177 { 00:13:19.177 "name": "BaseBdev2", 00:13:19.177 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:19.177 "is_configured": true, 00:13:19.177 "data_offset": 0, 00:13:19.177 "data_size": 65536 00:13:19.177 } 00:13:19.177 ] 00:13:19.177 }' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.177 
04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.177 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.437 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.437 "name": "raid_bdev1", 00:13:19.437 "uuid": "af3df607-07e4-4769-8406-2439ad0fdf4f", 00:13:19.437 "strip_size_kb": 0, 00:13:19.437 "state": "online", 00:13:19.437 "raid_level": "raid1", 00:13:19.437 "superblock": false, 00:13:19.437 "num_base_bdevs": 2, 00:13:19.437 "num_base_bdevs_discovered": 2, 00:13:19.437 "num_base_bdevs_operational": 2, 00:13:19.437 "base_bdevs_list": [ 00:13:19.437 { 00:13:19.437 "name": "spare", 00:13:19.437 "uuid": "d882ff62-0047-59a2-932d-c7162bed0a0e", 00:13:19.437 "is_configured": true, 00:13:19.437 "data_offset": 0, 00:13:19.437 "data_size": 65536 00:13:19.437 }, 00:13:19.437 { 00:13:19.437 "name": "BaseBdev2", 00:13:19.437 "uuid": "2990d0f3-6a15-53cd-aefb-7b52da55abcd", 00:13:19.437 "is_configured": true, 00:13:19.437 "data_offset": 0, 00:13:19.437 "data_size": 65536 00:13:19.437 } 00:13:19.437 ] 00:13:19.437 }' 00:13:19.437 04:02:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:13:19.437 04:02:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.697 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:19.697 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.697 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.697 [2024-11-18 04:02:16.219581] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.697 [2024-11-18 04:02:16.219613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.697 00:13:19.697 Latency(us) 00:13:19.697 [2024-11-18T04:02:16.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.697 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:19.697 raid_bdev1 : 7.66 93.00 278.99 0.00 0.00 14893.31 296.92 107604.96 00:13:19.697 [2024-11-18T04:02:16.338Z] =================================================================================================================== 00:13:19.697 [2024-11-18T04:02:16.338Z] Total : 93.00 278.99 0.00 0.00 14893.31 296.92 107604.96 00:13:19.957 [2024-11-18 04:02:16.340564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.957 [2024-11-18 04:02:16.340611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.957 [2024-11-18 04:02:16.340690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.957 [2024-11-18 04:02:16.340699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:19.957 { 00:13:19.957 "results": [ 00:13:19.957 { 00:13:19.957 "job": "raid_bdev1", 00:13:19.957 "core_mask": "0x1", 00:13:19.957 "workload": "randrw", 00:13:19.957 "percentage": 50, 
00:13:19.957 "status": "finished", 00:13:19.957 "queue_depth": 2, 00:13:19.957 "io_size": 3145728, 00:13:19.957 "runtime": 7.656141, 00:13:19.957 "iops": 92.99724234441346, 00:13:19.957 "mibps": 278.9917270332404, 00:13:19.957 "io_failed": 0, 00:13:19.957 "io_timeout": 0, 00:13:19.957 "avg_latency_us": 14893.310279181589, 00:13:19.957 "min_latency_us": 296.91528384279474, 00:13:19.957 "max_latency_us": 107604.96069868996 00:13:19.957 } 00:13:19.957 ], 00:13:19.957 "core_count": 1 00:13:19.957 } 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:19.957 
04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:19.957 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:19.957 /dev/nbd0 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.218 1+0 records in 00:13:20.218 1+0 records out 00:13:20.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376118 s, 10.9 MB/s 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:20.218 /dev/nbd1 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:20.218 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:20.478 1+0 records in 00:13:20.478 1+0 records out 00:13:20.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376369 s, 10.9 MB/s 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.478 04:02:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.478 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.738 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76435 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76435 ']' 00:13:20.998 04:02:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76435 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76435 00:13:20.998 killing process with pid 76435 00:13:20.998 Received shutdown signal, test time was about 8.858518 seconds 00:13:20.998 00:13:20.998 Latency(us) 00:13:20.998 [2024-11-18T04:02:17.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.998 [2024-11-18T04:02:17.639Z] =================================================================================================================== 00:13:20.998 [2024-11-18T04:02:17.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76435' 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76435 00:13:20.998 04:02:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76435 00:13:20.998 [2024-11-18 04:02:17.519401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.257 [2024-11-18 04:02:17.759593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:22.640 04:02:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:22.640 ************************************ 00:13:22.640 END TEST raid_rebuild_test_io 00:13:22.640 ************************************ 00:13:22.640 00:13:22.640 real 0m12.050s 00:13:22.640 user 0m15.116s 
00:13:22.640 sys 0m1.407s 00:13:22.640 04:02:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.640 04:02:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.640 04:02:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:22.640 04:02:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:22.640 04:02:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.640 04:02:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:22.640 ************************************ 00:13:22.640 START TEST raid_rebuild_test_sb_io 00:13:22.640 ************************************ 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.640 04:02:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:22.640 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76811 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76811 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76811 ']' 
00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.641 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.641 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:22.641 Zero copy mechanism will not be used. 00:13:22.641 [2024-11-18 04:02:19.167410] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:22.641 [2024-11-18 04:02:19.167521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76811 ] 00:13:22.900 [2024-11-18 04:02:19.341168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.900 [2024-11-18 04:02:19.454183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.159 [2024-11-18 04:02:19.648534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.159 [2024-11-18 04:02:19.648566] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.419 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.419 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:23.419 04:02:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.419 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:23.419 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.419 04:02:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.419 BaseBdev1_malloc 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.419 [2024-11-18 04:02:20.030427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:23.419 [2024-11-18 04:02:20.030491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.419 [2024-11-18 04:02:20.030515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:23.419 [2024-11-18 04:02:20.030525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.419 [2024-11-18 04:02:20.032544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.419 [2024-11-18 04:02:20.032583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.419 BaseBdev1 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.419 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.679 BaseBdev2_malloc 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.679 [2024-11-18 04:02:20.083871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:23.679 [2024-11-18 04:02:20.083918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.679 [2024-11-18 04:02:20.083936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:23.679 [2024-11-18 04:02:20.083948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.679 [2024-11-18 04:02:20.085906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.679 [2024-11-18 04:02:20.085942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:23.679 BaseBdev2 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:23.679 spare_malloc 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.679 spare_delay 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.679 [2024-11-18 04:02:20.162523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:23.679 [2024-11-18 04:02:20.162580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.679 [2024-11-18 04:02:20.162599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:23.679 [2024-11-18 04:02:20.162610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.679 [2024-11-18 04:02:20.164701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.679 [2024-11-18 04:02:20.164744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:23.679 spare 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 
BaseBdev2'\''' -n raid_bdev1 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.679 [2024-11-18 04:02:20.174564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.679 [2024-11-18 04:02:20.176508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.679 [2024-11-18 04:02:20.176671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:23.679 [2024-11-18 04:02:20.176687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.679 [2024-11-18 04:02:20.176937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:23.679 [2024-11-18 04:02:20.177113] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:23.679 [2024-11-18 04:02:20.177130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:23.679 [2024-11-18 04:02:20.177302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.679 04:02:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.679 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.680 "name": "raid_bdev1", 00:13:23.680 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:23.680 "strip_size_kb": 0, 00:13:23.680 "state": "online", 00:13:23.680 "raid_level": "raid1", 00:13:23.680 "superblock": true, 00:13:23.680 "num_base_bdevs": 2, 00:13:23.680 "num_base_bdevs_discovered": 2, 00:13:23.680 "num_base_bdevs_operational": 2, 00:13:23.680 "base_bdevs_list": [ 00:13:23.680 { 00:13:23.680 "name": "BaseBdev1", 00:13:23.680 "uuid": "e25a8212-5a6b-5318-89ea-8f8361165fdf", 00:13:23.680 "is_configured": true, 00:13:23.680 "data_offset": 2048, 00:13:23.680 "data_size": 63488 00:13:23.680 }, 00:13:23.680 { 00:13:23.680 "name": "BaseBdev2", 00:13:23.680 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:23.680 "is_configured": true, 00:13:23.680 "data_offset": 2048, 
00:13:23.680 "data_size": 63488 00:13:23.680 } 00:13:23.680 ] 00:13:23.680 }' 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.680 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 [2024-11-18 04:02:20.654026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:24.258 04:02:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 [2024-11-18 04:02:20.749537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.258 04:02:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.258 "name": "raid_bdev1", 00:13:24.258 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:24.258 "strip_size_kb": 0, 00:13:24.258 "state": "online", 00:13:24.258 "raid_level": "raid1", 00:13:24.258 "superblock": true, 00:13:24.258 "num_base_bdevs": 2, 00:13:24.258 "num_base_bdevs_discovered": 1, 00:13:24.258 "num_base_bdevs_operational": 1, 00:13:24.258 "base_bdevs_list": [ 00:13:24.258 { 00:13:24.258 "name": null, 00:13:24.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.258 "is_configured": false, 00:13:24.258 "data_offset": 0, 00:13:24.258 "data_size": 63488 00:13:24.258 }, 00:13:24.258 { 00:13:24.258 "name": "BaseBdev2", 00:13:24.258 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:24.258 "is_configured": true, 00:13:24.258 "data_offset": 2048, 00:13:24.258 "data_size": 63488 00:13:24.258 } 00:13:24.258 ] 00:13:24.258 }' 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.258 04:02:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.258 [2024-11-18 04:02:20.845089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:24.258 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:24.258 Zero copy mechanism will not be used. 00:13:24.258 Running I/O for 60 seconds... 
00:13:24.846 04:02:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.846 04:02:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.846 04:02:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.846 [2024-11-18 04:02:21.191724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.846 04:02:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.846 04:02:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:24.846 [2024-11-18 04:02:21.246958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:24.846 [2024-11-18 04:02:21.248927] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.846 [2024-11-18 04:02:21.379360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:25.107 [2024-11-18 04:02:21.493112] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:25.107 [2024-11-18 04:02:21.493512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:25.367 [2024-11-18 04:02:21.824844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:25.367 [2024-11-18 04:02:21.825435] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:25.627 183.00 IOPS, 549.00 MiB/s [2024-11-18T04:02:22.268Z] [2024-11-18 04:02:22.048993] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:25.627 [2024-11-18 04:02:22.049351] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.627 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.887 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.887 "name": "raid_bdev1", 00:13:25.887 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:25.887 "strip_size_kb": 0, 00:13:25.887 "state": "online", 00:13:25.887 "raid_level": "raid1", 00:13:25.887 "superblock": true, 00:13:25.887 "num_base_bdevs": 2, 00:13:25.887 "num_base_bdevs_discovered": 2, 00:13:25.887 "num_base_bdevs_operational": 2, 00:13:25.887 "process": { 00:13:25.887 "type": "rebuild", 00:13:25.887 "target": "spare", 00:13:25.887 "progress": { 00:13:25.887 "blocks": 10240, 00:13:25.887 "percent": 16 00:13:25.887 } 00:13:25.888 }, 00:13:25.888 "base_bdevs_list": [ 00:13:25.888 { 00:13:25.888 "name": "spare", 
00:13:25.888 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:25.888 "is_configured": true, 00:13:25.888 "data_offset": 2048, 00:13:25.888 "data_size": 63488 00:13:25.888 }, 00:13:25.888 { 00:13:25.888 "name": "BaseBdev2", 00:13:25.888 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:25.888 "is_configured": true, 00:13:25.888 "data_offset": 2048, 00:13:25.888 "data_size": 63488 00:13:25.888 } 00:13:25.888 ] 00:13:25.888 }' 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.888 [2024-11-18 04:02:22.357906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.888 [2024-11-18 04:02:22.401816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:25.888 [2024-11-18 04:02:22.429763] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:25.888 [2024-11-18 04:02:22.432155] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.888 [2024-11-18 04:02:22.432229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:25.888 [2024-11-18 04:02:22.432260] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:13:25.888 [2024-11-18 04:02:22.481569] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.888 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.148 04:02:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.148 "name": "raid_bdev1", 00:13:26.148 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:26.148 "strip_size_kb": 0, 00:13:26.148 "state": "online", 00:13:26.148 "raid_level": "raid1", 00:13:26.148 "superblock": true, 00:13:26.148 "num_base_bdevs": 2, 00:13:26.148 "num_base_bdevs_discovered": 1, 00:13:26.148 "num_base_bdevs_operational": 1, 00:13:26.148 "base_bdevs_list": [ 00:13:26.148 { 00:13:26.148 "name": null, 00:13:26.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.148 "is_configured": false, 00:13:26.148 "data_offset": 0, 00:13:26.148 "data_size": 63488 00:13:26.148 }, 00:13:26.148 { 00:13:26.148 "name": "BaseBdev2", 00:13:26.148 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:26.148 "is_configured": true, 00:13:26.148 "data_offset": 2048, 00:13:26.148 "data_size": 63488 00:13:26.148 } 00:13:26.148 ] 00:13:26.148 }' 00:13:26.149 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.149 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.409 169.00 IOPS, 507.00 MiB/s [2024-11-18T04:02:23.050Z] 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.409 "name": "raid_bdev1", 00:13:26.409 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:26.409 "strip_size_kb": 0, 00:13:26.409 "state": "online", 00:13:26.409 "raid_level": "raid1", 00:13:26.409 "superblock": true, 00:13:26.409 "num_base_bdevs": 2, 00:13:26.409 "num_base_bdevs_discovered": 1, 00:13:26.409 "num_base_bdevs_operational": 1, 00:13:26.409 "base_bdevs_list": [ 00:13:26.409 { 00:13:26.409 "name": null, 00:13:26.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.409 "is_configured": false, 00:13:26.409 "data_offset": 0, 00:13:26.409 "data_size": 63488 00:13:26.409 }, 00:13:26.409 { 00:13:26.409 "name": "BaseBdev2", 00:13:26.409 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:26.409 "is_configured": true, 00:13:26.409 "data_offset": 2048, 00:13:26.409 "data_size": 63488 00:13:26.409 } 00:13:26.409 ] 00:13:26.409 }' 00:13:26.409 04:02:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.409 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.409 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.668 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.668 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.668 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:26.668 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.668 [2024-11-18 04:02:23.076928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.669 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.669 04:02:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:26.669 [2024-11-18 04:02:23.124516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:26.669 [2024-11-18 04:02:23.126401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.669 [2024-11-18 04:02:23.233833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.669 [2024-11-18 04:02:23.234324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:26.928 [2024-11-18 04:02:23.442689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:26.928 [2024-11-18 04:02:23.443094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:27.497 156.67 IOPS, 470.00 MiB/s [2024-11-18T04:02:24.138Z] [2024-11-18 04:02:23.889090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:27.497 [2024-11-18 04:02:24.107829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.497 04:02:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.497 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.757 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.757 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.757 "name": "raid_bdev1", 00:13:27.757 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:27.757 "strip_size_kb": 0, 00:13:27.757 "state": "online", 00:13:27.757 "raid_level": "raid1", 00:13:27.757 "superblock": true, 00:13:27.757 "num_base_bdevs": 2, 00:13:27.757 "num_base_bdevs_discovered": 2, 00:13:27.757 "num_base_bdevs_operational": 2, 00:13:27.757 "process": { 00:13:27.757 "type": "rebuild", 00:13:27.757 "target": "spare", 00:13:27.757 "progress": { 00:13:27.757 "blocks": 14336, 00:13:27.757 "percent": 22 00:13:27.757 } 00:13:27.757 }, 00:13:27.757 "base_bdevs_list": [ 00:13:27.757 { 00:13:27.757 "name": "spare", 00:13:27.757 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:27.757 "is_configured": true, 00:13:27.757 "data_offset": 2048, 00:13:27.757 "data_size": 63488 00:13:27.757 }, 00:13:27.757 { 00:13:27.757 "name": "BaseBdev2", 00:13:27.757 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:27.757 "is_configured": true, 00:13:27.757 "data_offset": 2048, 00:13:27.757 "data_size": 
63488 00:13:27.757 } 00:13:27.757 ] 00:13:27.757 }' 00:13:27.757 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.757 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.757 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.757 [2024-11-18 04:02:24.213865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:27.757 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:27.758 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=418 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.758 "name": "raid_bdev1", 00:13:27.758 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:27.758 "strip_size_kb": 0, 00:13:27.758 "state": "online", 00:13:27.758 "raid_level": "raid1", 00:13:27.758 "superblock": true, 00:13:27.758 "num_base_bdevs": 2, 00:13:27.758 "num_base_bdevs_discovered": 2, 00:13:27.758 "num_base_bdevs_operational": 2, 00:13:27.758 "process": { 00:13:27.758 "type": "rebuild", 00:13:27.758 "target": "spare", 00:13:27.758 "progress": { 00:13:27.758 "blocks": 16384, 00:13:27.758 "percent": 25 00:13:27.758 } 00:13:27.758 }, 00:13:27.758 "base_bdevs_list": [ 00:13:27.758 { 00:13:27.758 "name": "spare", 00:13:27.758 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:27.758 "is_configured": true, 00:13:27.758 "data_offset": 2048, 00:13:27.758 "data_size": 63488 00:13:27.758 }, 00:13:27.758 { 00:13:27.758 "name": "BaseBdev2", 00:13:27.758 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:27.758 "is_configured": true, 00:13:27.758 "data_offset": 2048, 00:13:27.758 "data_size": 63488 00:13:27.758 } 00:13:27.758 ] 00:13:27.758 }' 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.758 04:02:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.018 [2024-11-18 04:02:24.538210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:28.536 132.25 IOPS, 396.75 MiB/s [2024-11-18T04:02:25.177Z] [2024-11-18 04:02:25.010951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:28.536 [2024-11-18 04:02:25.011535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:28.795 [2024-11-18 04:02:25.237743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.795 04:02:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.795 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.795 "name": "raid_bdev1", 00:13:28.795 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:28.795 "strip_size_kb": 0, 00:13:28.795 "state": "online", 00:13:28.795 "raid_level": "raid1", 00:13:28.795 "superblock": true, 00:13:28.795 "num_base_bdevs": 2, 00:13:28.796 "num_base_bdevs_discovered": 2, 00:13:28.796 "num_base_bdevs_operational": 2, 00:13:28.796 "process": { 00:13:28.796 "type": "rebuild", 00:13:28.796 "target": "spare", 00:13:28.796 "progress": { 00:13:28.796 "blocks": 30720, 00:13:28.796 "percent": 48 00:13:28.796 } 00:13:28.796 }, 00:13:28.796 "base_bdevs_list": [ 00:13:28.796 { 00:13:28.796 "name": "spare", 00:13:28.796 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:28.796 "is_configured": true, 00:13:28.796 "data_offset": 2048, 00:13:28.796 "data_size": 63488 00:13:28.796 }, 00:13:28.796 { 00:13:28.796 "name": "BaseBdev2", 00:13:28.796 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:28.796 "is_configured": true, 00:13:28.796 "data_offset": 2048, 00:13:28.796 "data_size": 63488 00:13:28.796 } 00:13:28.796 ] 00:13:28.796 }' 00:13:28.796 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.055 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:29.055 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.055 [2024-11-18 
04:02:25.459129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:29.055 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.055 04:02:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.055 [2024-11-18 04:02:25.568067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:29.314 [2024-11-18 04:02:25.799075] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:29.314 116.40 IOPS, 349.20 MiB/s [2024-11-18T04:02:25.955Z] [2024-11-18 04:02:25.906125] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:29.885 [2024-11-18 04:02:26.333831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.885 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.144 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.144 "name": "raid_bdev1", 00:13:30.144 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:30.144 "strip_size_kb": 0, 00:13:30.144 "state": "online", 00:13:30.144 "raid_level": "raid1", 00:13:30.144 "superblock": true, 00:13:30.144 "num_base_bdevs": 2, 00:13:30.144 "num_base_bdevs_discovered": 2, 00:13:30.144 "num_base_bdevs_operational": 2, 00:13:30.144 "process": { 00:13:30.144 "type": "rebuild", 00:13:30.144 "target": "spare", 00:13:30.144 "progress": { 00:13:30.144 "blocks": 47104, 00:13:30.144 "percent": 74 00:13:30.144 } 00:13:30.144 }, 00:13:30.144 "base_bdevs_list": [ 00:13:30.144 { 00:13:30.144 "name": "spare", 00:13:30.144 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:30.144 "is_configured": true, 00:13:30.144 "data_offset": 2048, 00:13:30.145 "data_size": 63488 00:13:30.145 }, 00:13:30.145 { 00:13:30.145 "name": "BaseBdev2", 00:13:30.145 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:30.145 "is_configured": true, 00:13:30.145 "data_offset": 2048, 00:13:30.145 "data_size": 63488 00:13:30.145 } 00:13:30.145 ] 00:13:30.145 }' 00:13:30.145 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.145 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.145 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.145 04:02:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.145 04:02:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:30.145 [2024-11-18 04:02:26.683045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:30.404 105.83 IOPS, 317.50 MiB/s [2024-11-18T04:02:27.045Z] [2024-11-18 04:02:27.011553] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:30.404 [2024-11-18 04:02:27.011959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:30.729 [2024-11-18 04:02:27.339271] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:30.996 [2024-11-18 04:02:27.444248] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:30.996 [2024-11-18 04:02:27.446209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.996 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.256 "name": "raid_bdev1", 00:13:31.256 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:31.256 "strip_size_kb": 0, 00:13:31.256 "state": "online", 00:13:31.256 "raid_level": "raid1", 00:13:31.256 "superblock": true, 00:13:31.256 "num_base_bdevs": 2, 00:13:31.256 "num_base_bdevs_discovered": 2, 00:13:31.256 "num_base_bdevs_operational": 2, 00:13:31.256 "base_bdevs_list": [ 00:13:31.256 { 00:13:31.256 "name": "spare", 00:13:31.256 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:31.256 "is_configured": true, 00:13:31.256 "data_offset": 2048, 00:13:31.256 "data_size": 63488 00:13:31.256 }, 00:13:31.256 { 00:13:31.256 "name": "BaseBdev2", 00:13:31.256 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:31.256 "is_configured": true, 00:13:31.256 "data_offset": 2048, 00:13:31.256 "data_size": 63488 00:13:31.256 } 00:13:31.256 ] 00:13:31.256 }' 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.256 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.256 "name": "raid_bdev1", 00:13:31.256 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:31.256 "strip_size_kb": 0, 00:13:31.256 "state": "online", 00:13:31.256 "raid_level": "raid1", 00:13:31.256 "superblock": true, 00:13:31.256 "num_base_bdevs": 2, 00:13:31.256 "num_base_bdevs_discovered": 2, 00:13:31.256 "num_base_bdevs_operational": 2, 00:13:31.256 "base_bdevs_list": [ 00:13:31.256 { 00:13:31.256 "name": "spare", 00:13:31.256 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:31.256 "is_configured": true, 00:13:31.256 "data_offset": 2048, 00:13:31.256 "data_size": 63488 00:13:31.256 }, 00:13:31.256 { 00:13:31.256 "name": "BaseBdev2", 00:13:31.256 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:31.257 "is_configured": true, 00:13:31.257 "data_offset": 2048, 00:13:31.257 "data_size": 63488 00:13:31.257 } 00:13:31.257 ] 00:13:31.257 }' 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.257 96.43 IOPS, 289.29 MiB/s [2024-11-18T04:02:27.898Z] 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.257 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.516 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.516 "name": "raid_bdev1", 00:13:31.516 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:31.516 "strip_size_kb": 0, 00:13:31.516 "state": "online", 00:13:31.516 "raid_level": "raid1", 00:13:31.516 "superblock": true, 00:13:31.516 "num_base_bdevs": 2, 00:13:31.516 "num_base_bdevs_discovered": 2, 00:13:31.516 "num_base_bdevs_operational": 2, 00:13:31.516 "base_bdevs_list": [ 00:13:31.516 { 00:13:31.516 "name": "spare", 00:13:31.516 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:31.516 "is_configured": true, 00:13:31.516 "data_offset": 2048, 00:13:31.516 "data_size": 63488 00:13:31.516 }, 00:13:31.517 { 00:13:31.517 "name": "BaseBdev2", 00:13:31.517 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:31.517 "is_configured": true, 00:13:31.517 "data_offset": 2048, 00:13:31.517 "data_size": 63488 00:13:31.517 } 00:13:31.517 ] 00:13:31.517 }' 00:13:31.517 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.517 04:02:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:31.776 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.776 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 [2024-11-18 04:02:28.293413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.776 [2024-11-18 04:02:28.293493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.776 00:13:31.776 Latency(us) 00:13:31.776 [2024-11-18T04:02:28.417Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.776 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:31.776 raid_bdev1 : 7.47 93.01 279.02 0.00 0.00 14591.03 295.13 114931.26 00:13:31.776 [2024-11-18T04:02:28.417Z] =================================================================================================================== 00:13:31.776 [2024-11-18T04:02:28.417Z] Total : 93.01 279.02 0.00 0.00 14591.03 295.13 114931.26 00:13:31.776 [2024-11-18 04:02:28.325143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.776 [2024-11-18 04:02:28.325238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.776 [2024-11-18 04:02:28.325330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.776 [2024-11-18 04:02:28.325409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:31.776 { 00:13:31.776 "results": [ 00:13:31.777 { 00:13:31.777 "job": "raid_bdev1", 00:13:31.777 "core_mask": "0x1", 00:13:31.777 "workload": "randrw", 00:13:31.777 "percentage": 50, 00:13:31.777 "status": "finished", 00:13:31.777 "queue_depth": 2, 00:13:31.777 "io_size": 3145728, 00:13:31.777 "runtime": 7.472568, 00:13:31.777 "iops": 93.00684851579805, 00:13:31.777 "mibps": 279.02054554739414, 00:13:31.777 "io_failed": 0, 00:13:31.777 "io_timeout": 0, 00:13:31.777 "avg_latency_us": 14591.033937984985, 00:13:31.777 "min_latency_us": 295.12663755458516, 00:13:31.777 "max_latency_us": 114931.2558951965 00:13:31.777 } 00:13:31.777 ], 00:13:31.777 "core_count": 1 00:13:31.777 } 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.777 04:02:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.777 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:32.036 /dev/nbd0 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.036 04:02:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.036 1+0 records in 00:13:32.036 1+0 records out 00:13:32.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492075 s, 8.3 MB/s 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.036 
04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.036 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:32.296 /dev/nbd1 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.296 1+0 records in 00:13:32.296 1+0 records out 00:13:32.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199927 s, 20.5 MB/s 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:32.296 04:02:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:32.557 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:32.557 
04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.557 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:32.557 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.557 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.557 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.557 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.817 
04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.817 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.077 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.077 
04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.077 [2024-11-18 04:02:29.525471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.077 [2024-11-18 04:02:29.525519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.077 [2024-11-18 04:02:29.525538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:33.077 [2024-11-18 04:02:29.525549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.077 [2024-11-18 04:02:29.528070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.078 [2024-11-18 04:02:29.528212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.078 [2024-11-18 04:02:29.528371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:33.078 [2024-11-18 04:02:29.528424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.078 [2024-11-18 04:02:29.528551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.078 spare 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.078 [2024-11-18 04:02:29.628447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:33.078 [2024-11-18 04:02:29.628468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.078 [2024-11-18 04:02:29.628711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d00002b0d0 00:13:33.078 [2024-11-18 04:02:29.628868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:33.078 [2024-11-18 04:02:29.628880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:33.078 [2024-11-18 04:02:29.629064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.078 "name": "raid_bdev1", 00:13:33.078 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:33.078 "strip_size_kb": 0, 00:13:33.078 "state": "online", 00:13:33.078 "raid_level": "raid1", 00:13:33.078 "superblock": true, 00:13:33.078 "num_base_bdevs": 2, 00:13:33.078 "num_base_bdevs_discovered": 2, 00:13:33.078 "num_base_bdevs_operational": 2, 00:13:33.078 "base_bdevs_list": [ 00:13:33.078 { 00:13:33.078 "name": "spare", 00:13:33.078 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:33.078 "is_configured": true, 00:13:33.078 "data_offset": 2048, 00:13:33.078 "data_size": 63488 00:13:33.078 }, 00:13:33.078 { 00:13:33.078 "name": "BaseBdev2", 00:13:33.078 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:33.078 "is_configured": true, 00:13:33.078 "data_offset": 2048, 00:13:33.078 "data_size": 63488 00:13:33.078 } 00:13:33.078 ] 00:13:33.078 }' 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.078 04:02:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.648 "name": "raid_bdev1", 00:13:33.648 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:33.648 "strip_size_kb": 0, 00:13:33.648 "state": "online", 00:13:33.648 "raid_level": "raid1", 00:13:33.648 "superblock": true, 00:13:33.648 "num_base_bdevs": 2, 00:13:33.648 "num_base_bdevs_discovered": 2, 00:13:33.648 "num_base_bdevs_operational": 2, 00:13:33.648 "base_bdevs_list": [ 00:13:33.648 { 00:13:33.648 "name": "spare", 00:13:33.648 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:33.648 "is_configured": true, 00:13:33.648 "data_offset": 2048, 00:13:33.648 "data_size": 63488 00:13:33.648 }, 00:13:33.648 { 00:13:33.648 "name": "BaseBdev2", 00:13:33.648 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:33.648 "is_configured": true, 00:13:33.648 "data_offset": 2048, 00:13:33.648 "data_size": 63488 00:13:33.648 } 00:13:33.648 ] 00:13:33.648 }' 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 [2024-11-18 04:02:30.252326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.648 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.908 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.908 "name": "raid_bdev1", 00:13:33.908 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:33.908 "strip_size_kb": 0, 00:13:33.908 "state": "online", 00:13:33.908 "raid_level": "raid1", 00:13:33.908 "superblock": true, 00:13:33.908 "num_base_bdevs": 2, 00:13:33.908 "num_base_bdevs_discovered": 1, 00:13:33.908 "num_base_bdevs_operational": 1, 00:13:33.908 "base_bdevs_list": [ 00:13:33.908 { 00:13:33.908 "name": null, 00:13:33.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.908 "is_configured": false, 00:13:33.908 "data_offset": 0, 00:13:33.908 "data_size": 63488 00:13:33.908 }, 00:13:33.908 { 00:13:33.908 "name": "BaseBdev2", 00:13:33.908 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:33.908 "is_configured": true, 00:13:33.908 "data_offset": 2048, 00:13:33.908 "data_size": 63488 00:13:33.908 } 00:13:33.908 ] 00:13:33.908 }' 00:13:33.908 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:33.908 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.168 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.168 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.168 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.168 [2024-11-18 04:02:30.747607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.168 [2024-11-18 04:02:30.747873] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.168 [2024-11-18 04:02:30.747936] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:34.168 [2024-11-18 04:02:30.748299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.168 [2024-11-18 04:02:30.764189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:34.168 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.168 04:02:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:34.168 [2024-11-18 04:02:30.766001] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.549 04:02:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.549 "name": "raid_bdev1", 00:13:35.549 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:35.549 "strip_size_kb": 0, 00:13:35.549 "state": "online", 00:13:35.549 "raid_level": "raid1", 00:13:35.549 "superblock": true, 00:13:35.549 "num_base_bdevs": 2, 00:13:35.549 "num_base_bdevs_discovered": 2, 00:13:35.549 "num_base_bdevs_operational": 2, 00:13:35.549 "process": { 00:13:35.549 "type": "rebuild", 00:13:35.549 "target": "spare", 00:13:35.549 "progress": { 00:13:35.549 "blocks": 20480, 00:13:35.549 "percent": 32 00:13:35.549 } 00:13:35.549 }, 00:13:35.549 "base_bdevs_list": [ 00:13:35.549 { 00:13:35.549 "name": "spare", 00:13:35.549 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:35.549 "is_configured": true, 00:13:35.549 "data_offset": 2048, 00:13:35.549 "data_size": 63488 00:13:35.549 }, 00:13:35.549 { 00:13:35.549 "name": "BaseBdev2", 00:13:35.549 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:35.549 "is_configured": true, 00:13:35.549 "data_offset": 2048, 00:13:35.549 "data_size": 63488 00:13:35.549 } 00:13:35.549 ] 00:13:35.549 }' 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.549 04:02:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.549 04:02:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.549 [2024-11-18 04:02:31.921341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.549 [2024-11-18 04:02:31.970661] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.549 [2024-11-18 04:02:31.971113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.549 [2024-11-18 04:02:31.971139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.549 [2024-11-18 04:02:31.971149] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.549 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.549 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.549 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.550 04:02:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.550 "name": "raid_bdev1", 00:13:35.550 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:35.550 "strip_size_kb": 0, 00:13:35.550 "state": "online", 00:13:35.550 "raid_level": "raid1", 00:13:35.550 "superblock": true, 00:13:35.550 "num_base_bdevs": 2, 00:13:35.550 "num_base_bdevs_discovered": 1, 00:13:35.550 "num_base_bdevs_operational": 1, 00:13:35.550 "base_bdevs_list": [ 00:13:35.550 { 00:13:35.550 "name": null, 00:13:35.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.550 "is_configured": false, 00:13:35.550 "data_offset": 0, 00:13:35.550 "data_size": 63488 00:13:35.550 }, 00:13:35.550 { 00:13:35.550 "name": "BaseBdev2", 00:13:35.550 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:35.550 "is_configured": true, 00:13:35.550 "data_offset": 2048, 00:13:35.550 
"data_size": 63488 00:13:35.550 } 00:13:35.550 ] 00:13:35.550 }' 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.550 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.120 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:36.120 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.120 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.120 [2024-11-18 04:02:32.470412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:36.120 [2024-11-18 04:02:32.470698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.120 [2024-11-18 04:02:32.470842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:36.120 [2024-11-18 04:02:32.470947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.120 [2024-11-18 04:02:32.471463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.120 [2024-11-18 04:02:32.471671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:36.120 [2024-11-18 04:02:32.471882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:36.120 [2024-11-18 04:02:32.471935] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:36.120 [2024-11-18 04:02:32.472002] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:36.120 [2024-11-18 04:02:32.472121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.120 [2024-11-18 04:02:32.488418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:36.120 spare 00:13:36.120 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.120 04:02:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:36.120 [2024-11-18 04:02:32.490313] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.134 "name": "raid_bdev1", 00:13:37.134 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:37.134 "strip_size_kb": 0, 00:13:37.134 
"state": "online", 00:13:37.134 "raid_level": "raid1", 00:13:37.134 "superblock": true, 00:13:37.134 "num_base_bdevs": 2, 00:13:37.134 "num_base_bdevs_discovered": 2, 00:13:37.134 "num_base_bdevs_operational": 2, 00:13:37.134 "process": { 00:13:37.134 "type": "rebuild", 00:13:37.134 "target": "spare", 00:13:37.134 "progress": { 00:13:37.134 "blocks": 20480, 00:13:37.134 "percent": 32 00:13:37.134 } 00:13:37.134 }, 00:13:37.134 "base_bdevs_list": [ 00:13:37.134 { 00:13:37.134 "name": "spare", 00:13:37.134 "uuid": "0e9b5afd-646b-58a4-8f07-9d1f6ecb3810", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 2048, 00:13:37.134 "data_size": 63488 00:13:37.134 }, 00:13:37.134 { 00:13:37.134 "name": "BaseBdev2", 00:13:37.134 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:37.134 "is_configured": true, 00:13:37.134 "data_offset": 2048, 00:13:37.134 "data_size": 63488 00:13:37.134 } 00:13:37.134 ] 00:13:37.134 }' 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.134 [2024-11-18 04:02:33.657922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.134 [2024-11-18 04:02:33.695222] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:37.134 [2024-11-18 04:02:33.695723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.134 [2024-11-18 04:02:33.695745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.134 [2024-11-18 04:02:33.695756] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.134 04:02:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.134 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.394 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.394 "name": "raid_bdev1", 00:13:37.394 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:37.394 "strip_size_kb": 0, 00:13:37.394 "state": "online", 00:13:37.394 "raid_level": "raid1", 00:13:37.394 "superblock": true, 00:13:37.394 "num_base_bdevs": 2, 00:13:37.394 "num_base_bdevs_discovered": 1, 00:13:37.394 "num_base_bdevs_operational": 1, 00:13:37.394 "base_bdevs_list": [ 00:13:37.394 { 00:13:37.394 "name": null, 00:13:37.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.394 "is_configured": false, 00:13:37.394 "data_offset": 0, 00:13:37.394 "data_size": 63488 00:13:37.394 }, 00:13:37.394 { 00:13:37.394 "name": "BaseBdev2", 00:13:37.394 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:37.394 "is_configured": true, 00:13:37.394 "data_offset": 2048, 00:13:37.394 "data_size": 63488 00:13:37.394 } 00:13:37.394 ] 00:13:37.394 }' 00:13:37.394 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.394 04:02:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.654 "name": "raid_bdev1", 00:13:37.654 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:37.654 "strip_size_kb": 0, 00:13:37.654 "state": "online", 00:13:37.654 "raid_level": "raid1", 00:13:37.654 "superblock": true, 00:13:37.654 "num_base_bdevs": 2, 00:13:37.654 "num_base_bdevs_discovered": 1, 00:13:37.654 "num_base_bdevs_operational": 1, 00:13:37.654 "base_bdevs_list": [ 00:13:37.654 { 00:13:37.654 "name": null, 00:13:37.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.654 "is_configured": false, 00:13:37.654 "data_offset": 0, 00:13:37.654 "data_size": 63488 00:13:37.654 }, 00:13:37.654 { 00:13:37.654 "name": "BaseBdev2", 00:13:37.654 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:37.654 "is_configured": true, 00:13:37.654 "data_offset": 2048, 00:13:37.654 "data_size": 63488 00:13:37.654 } 00:13:37.654 ] 00:13:37.654 }' 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.654 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.914 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.914 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:37.914 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.914 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.914 [2024-11-18 04:02:34.310574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:37.914 [2024-11-18 04:02:34.310635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.914 [2024-11-18 04:02:34.310655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:37.914 [2024-11-18 04:02:34.310665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.914 [2024-11-18 04:02:34.311112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.914 [2024-11-18 04:02:34.311134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:37.914 [2024-11-18 04:02:34.311213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:37.914 [2024-11-18 04:02:34.311229] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:37.914 [2024-11-18 04:02:34.311236] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:37.914 [2024-11-18 04:02:34.311249] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:37.914 BaseBdev1 00:13:37.914 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.914 04:02:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.853 "name": "raid_bdev1", 00:13:38.853 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:38.853 "strip_size_kb": 0, 00:13:38.853 "state": "online", 00:13:38.853 "raid_level": "raid1", 00:13:38.853 "superblock": true, 00:13:38.853 "num_base_bdevs": 2, 00:13:38.853 "num_base_bdevs_discovered": 1, 00:13:38.853 "num_base_bdevs_operational": 1, 00:13:38.853 "base_bdevs_list": [ 00:13:38.853 { 00:13:38.853 "name": null, 00:13:38.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.853 "is_configured": false, 00:13:38.853 "data_offset": 0, 00:13:38.853 "data_size": 63488 00:13:38.853 }, 00:13:38.853 { 00:13:38.853 "name": "BaseBdev2", 00:13:38.853 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:38.853 "is_configured": true, 00:13:38.853 "data_offset": 2048, 00:13:38.853 "data_size": 63488 00:13:38.853 } 00:13:38.853 ] 00:13:38.853 }' 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.853 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.423 "name": "raid_bdev1", 00:13:39.423 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:39.423 "strip_size_kb": 0, 00:13:39.423 "state": "online", 00:13:39.423 "raid_level": "raid1", 00:13:39.423 "superblock": true, 00:13:39.423 "num_base_bdevs": 2, 00:13:39.423 "num_base_bdevs_discovered": 1, 00:13:39.423 "num_base_bdevs_operational": 1, 00:13:39.423 "base_bdevs_list": [ 00:13:39.423 { 00:13:39.423 "name": null, 00:13:39.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.423 "is_configured": false, 00:13:39.423 "data_offset": 0, 00:13:39.423 "data_size": 63488 00:13:39.423 }, 00:13:39.423 { 00:13:39.423 "name": "BaseBdev2", 00:13:39.423 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:39.423 "is_configured": true, 00:13:39.423 "data_offset": 2048, 00:13:39.423 "data_size": 63488 00:13:39.423 } 00:13:39.423 ] 00:13:39.423 }' 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.423 [2024-11-18 04:02:35.940063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.423 [2024-11-18 04:02:35.940307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:39.423 [2024-11-18 04:02:35.940381] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:39.423 request: 00:13:39.423 { 00:13:39.423 "base_bdev": "BaseBdev1", 00:13:39.423 "raid_bdev": "raid_bdev1", 00:13:39.423 "method": "bdev_raid_add_base_bdev", 00:13:39.423 "req_id": 1 00:13:39.423 } 00:13:39.423 Got JSON-RPC error response 00:13:39.423 response: 00:13:39.423 { 00:13:39.423 "code": -22, 00:13:39.423 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:39.423 } 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:39.423 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.424 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.424 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.424 04:02:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.364 04:02:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.624 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.624 "name": "raid_bdev1", 00:13:40.624 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:40.624 "strip_size_kb": 0, 00:13:40.624 "state": "online", 00:13:40.624 "raid_level": "raid1", 00:13:40.624 "superblock": true, 00:13:40.624 "num_base_bdevs": 2, 00:13:40.624 "num_base_bdevs_discovered": 1, 00:13:40.624 "num_base_bdevs_operational": 1, 00:13:40.624 "base_bdevs_list": [ 00:13:40.624 { 00:13:40.624 "name": null, 00:13:40.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.624 "is_configured": false, 00:13:40.624 "data_offset": 0, 00:13:40.624 "data_size": 63488 00:13:40.624 }, 00:13:40.624 { 00:13:40.624 "name": "BaseBdev2", 00:13:40.624 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:40.624 "is_configured": true, 00:13:40.624 "data_offset": 2048, 00:13:40.624 "data_size": 63488 00:13:40.624 } 00:13:40.624 ] 00:13:40.624 }' 00:13:40.624 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.624 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.884 04:02:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.884 "name": "raid_bdev1", 00:13:40.884 "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5", 00:13:40.884 "strip_size_kb": 0, 00:13:40.884 "state": "online", 00:13:40.884 "raid_level": "raid1", 00:13:40.884 "superblock": true, 00:13:40.884 "num_base_bdevs": 2, 00:13:40.884 "num_base_bdevs_discovered": 1, 00:13:40.884 "num_base_bdevs_operational": 1, 00:13:40.884 "base_bdevs_list": [ 00:13:40.884 { 00:13:40.884 "name": null, 00:13:40.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.884 "is_configured": false, 00:13:40.884 "data_offset": 0, 00:13:40.884 "data_size": 63488 00:13:40.884 }, 00:13:40.884 { 00:13:40.884 "name": "BaseBdev2", 00:13:40.884 "uuid": "c1d00068-0a6f-5e97-8834-71914d3f49a7", 00:13:40.884 "is_configured": true, 00:13:40.884 "data_offset": 2048, 00:13:40.884 "data_size": 63488 00:13:40.884 } 00:13:40.884 ] 00:13:40.884 }' 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.884 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.145 04:02:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76811 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76811 ']' 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76811 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76811 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76811' 00:13:41.145 killing process with pid 76811 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76811 00:13:41.145 Received shutdown signal, test time was about 16.752889 seconds 00:13:41.145 00:13:41.145 Latency(us) 00:13:41.145 [2024-11-18T04:02:37.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.145 [2024-11-18T04:02:37.786Z] =================================================================================================================== 00:13:41.145 [2024-11-18T04:02:37.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:41.145 [2024-11-18 04:02:37.567812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.145 04:02:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76811 00:13:41.145 [2024-11-18 04:02:37.567944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.145 [2024-11-18 04:02:37.568001] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.145 [2024-11-18 04:02:37.568009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:41.145 [2024-11-18 04:02:37.778936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.528 ************************************ 00:13:42.528 END TEST raid_rebuild_test_sb_io 00:13:42.528 ************************************ 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:42.528 00:13:42.528 real 0m19.793s 00:13:42.528 user 0m25.923s 00:13:42.528 sys 0m2.149s 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.528 04:02:38 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:42.528 04:02:38 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:42.528 04:02:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:42.528 04:02:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.528 04:02:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.528 ************************************ 00:13:42.528 START TEST raid_rebuild_test 00:13:42.528 ************************************ 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:42.528 04:02:38 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77495 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77495 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77495 ']' 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.528 04:02:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.528 [2024-11-18 04:02:39.033199] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:42.528 [2024-11-18 04:02:39.033393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:42.528 Zero copy mechanism will not be used. 00:13:42.528 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77495 ] 00:13:42.787 [2024-11-18 04:02:39.208046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.787 [2024-11-18 04:02:39.317905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.047 [2024-11-18 04:02:39.510536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.047 [2024-11-18 04:02:39.510659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.308 BaseBdev1_malloc 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
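The `verify_raid_bdev_process` helper traced above selects the raid bdev's record with jq (`.[] | select(.name == "raid_bdev1")`) and then evaluates `.process.type // "none"` and `.process.target // "none"`, where `//` substitutes `"none"` when no background process is running. A minimal Python sketch of the same check, using field values copied from the JSON captured in this trace (an illustration only, not part of the SPDK test suite):

```python
import json

# Excerpt of the bdev_raid_get_bdevs output captured in the trace above;
# the "process" object is absent because no rebuild process is active.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "adcb068b-a6bb-41a3-9482-f0ee2896e3b5",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}
""")

# jq's '.process.type // "none"' falls back to "none" when the key is
# missing; dict.get with defaults reproduces that behavior in Python.
process = raid_bdev_info.get("process", {})
process_type = process.get("type", "none")
process_target = process.get("target", "none")

# These mirror the [[ none == \n\o\n\e ]] comparisons in the trace.
assert process_type == "none"
assert process_target == "none"
print(process_type, process_target)
```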
00:13:43.308 [2024-11-18 04:02:39.878296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:43.308 [2024-11-18 04:02:39.878365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.308 [2024-11-18 04:02:39.878389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:43.308 [2024-11-18 04:02:39.878399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.308 [2024-11-18 04:02:39.880402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.308 [2024-11-18 04:02:39.880440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.308 BaseBdev1 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.308 BaseBdev2_malloc 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.308 [2024-11-18 04:02:39.927156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:43.308 [2024-11-18 04:02:39.927233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:43.308 [2024-11-18 04:02:39.927251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:43.308 [2024-11-18 04:02:39.927261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.308 [2024-11-18 04:02:39.929196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.308 [2024-11-18 04:02:39.929290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:43.308 BaseBdev2 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.308 04:02:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.568 BaseBdev3_malloc 00:13:43.568 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.568 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:43.568 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.568 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 [2024-11-18 04:02:40.014391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:43.569 [2024-11-18 04:02:40.014494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.569 [2024-11-18 04:02:40.014519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:43.569 [2024-11-18 04:02:40.014529] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.569 [2024-11-18 04:02:40.016511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.569 [2024-11-18 04:02:40.016552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:43.569 BaseBdev3 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 BaseBdev4_malloc 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 [2024-11-18 04:02:40.066882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:43.569 [2024-11-18 04:02:40.066924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.569 [2024-11-18 04:02:40.066940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:43.569 [2024-11-18 04:02:40.066950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.569 [2024-11-18 04:02:40.068882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.569 [2024-11-18 04:02:40.068972] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:43.569 BaseBdev4 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 spare_malloc 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 spare_delay 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 [2024-11-18 04:02:40.132090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:43.569 [2024-11-18 04:02:40.132141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.569 [2024-11-18 04:02:40.132158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:43.569 [2024-11-18 04:02:40.132168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.569 [2024-11-18 
04:02:40.134093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.569 [2024-11-18 04:02:40.134130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:43.569 spare 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 [2024-11-18 04:02:40.144118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.569 [2024-11-18 04:02:40.145769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.569 [2024-11-18 04:02:40.145825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:43.569 [2024-11-18 04:02:40.145883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:43.569 [2024-11-18 04:02:40.145954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:43.569 [2024-11-18 04:02:40.145965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:43.569 [2024-11-18 04:02:40.146175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:43.569 [2024-11-18 04:02:40.146331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:43.569 [2024-11-18 04:02:40.146342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:43.569 [2024-11-18 04:02:40.146470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.569 "name": "raid_bdev1", 00:13:43.569 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:43.569 "strip_size_kb": 0, 00:13:43.569 "state": "online", 00:13:43.569 "raid_level": 
"raid1", 00:13:43.569 "superblock": false, 00:13:43.569 "num_base_bdevs": 4, 00:13:43.569 "num_base_bdevs_discovered": 4, 00:13:43.569 "num_base_bdevs_operational": 4, 00:13:43.569 "base_bdevs_list": [ 00:13:43.569 { 00:13:43.569 "name": "BaseBdev1", 00:13:43.569 "uuid": "3321c2c3-2cf8-5d09-992f-ec0129afc741", 00:13:43.569 "is_configured": true, 00:13:43.569 "data_offset": 0, 00:13:43.569 "data_size": 65536 00:13:43.569 }, 00:13:43.569 { 00:13:43.569 "name": "BaseBdev2", 00:13:43.569 "uuid": "68fdceb5-7c07-5a85-b0fb-5b7ad05211bb", 00:13:43.569 "is_configured": true, 00:13:43.569 "data_offset": 0, 00:13:43.569 "data_size": 65536 00:13:43.569 }, 00:13:43.569 { 00:13:43.569 "name": "BaseBdev3", 00:13:43.569 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:43.569 "is_configured": true, 00:13:43.569 "data_offset": 0, 00:13:43.569 "data_size": 65536 00:13:43.569 }, 00:13:43.569 { 00:13:43.569 "name": "BaseBdev4", 00:13:43.569 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:43.569 "is_configured": true, 00:13:43.569 "data_offset": 0, 00:13:43.569 "data_size": 65536 00:13:43.569 } 00:13:43.569 ] 00:13:43.569 }' 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.569 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 [2024-11-18 04:02:40.571706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.148 04:02:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.148 04:02:40 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:44.419 [2024-11-18 04:02:40.822996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:44.419 /dev/nbd0 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.419 1+0 records in 00:13:44.419 1+0 records out 00:13:44.419 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344997 s, 11.9 MB/s 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:44.419 04:02:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:49.701 65536+0 records in 00:13:49.701 65536+0 records out 00:13:49.701 33554432 bytes (34 MB, 32 MiB) copied, 4.95033 s, 6.8 MB/s 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.701 04:02:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:49.701 [2024-11-18 04:02:46.029323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:49.701 
04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.701 [2024-11-18 04:02:46.057985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.701 04:02:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.701 "name": "raid_bdev1", 00:13:49.701 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:49.701 "strip_size_kb": 0, 00:13:49.701 "state": "online", 00:13:49.701 "raid_level": "raid1", 00:13:49.701 "superblock": false, 00:13:49.701 "num_base_bdevs": 4, 00:13:49.701 "num_base_bdevs_discovered": 3, 00:13:49.701 "num_base_bdevs_operational": 3, 00:13:49.701 "base_bdevs_list": [ 00:13:49.701 { 00:13:49.701 "name": null, 00:13:49.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.701 "is_configured": false, 00:13:49.701 "data_offset": 0, 00:13:49.701 "data_size": 65536 00:13:49.701 }, 00:13:49.701 { 00:13:49.701 "name": "BaseBdev2", 00:13:49.701 "uuid": "68fdceb5-7c07-5a85-b0fb-5b7ad05211bb", 00:13:49.701 "is_configured": true, 00:13:49.701 "data_offset": 0, 00:13:49.701 "data_size": 65536 00:13:49.701 }, 00:13:49.701 { 00:13:49.701 "name": "BaseBdev3", 00:13:49.701 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:49.701 "is_configured": true, 00:13:49.701 "data_offset": 0, 00:13:49.701 "data_size": 65536 00:13:49.701 }, 00:13:49.701 { 00:13:49.701 "name": "BaseBdev4", 00:13:49.701 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:49.701 
"is_configured": true, 00:13:49.701 "data_offset": 0, 00:13:49.701 "data_size": 65536 00:13:49.701 } 00:13:49.701 ] 00:13:49.701 }' 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.701 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.961 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:49.961 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.961 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.961 [2024-11-18 04:02:46.513197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.961 [2024-11-18 04:02:46.527584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:49.961 04:02:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.961 04:02:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:49.961 [2024-11-18 04:02:46.529395] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:50.900 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.900 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.900 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.900 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.900 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.160 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.160 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:51.160 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.160 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.160 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.160 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.160 "name": "raid_bdev1", 00:13:51.160 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:51.160 "strip_size_kb": 0, 00:13:51.160 "state": "online", 00:13:51.160 "raid_level": "raid1", 00:13:51.160 "superblock": false, 00:13:51.160 "num_base_bdevs": 4, 00:13:51.160 "num_base_bdevs_discovered": 4, 00:13:51.160 "num_base_bdevs_operational": 4, 00:13:51.160 "process": { 00:13:51.160 "type": "rebuild", 00:13:51.160 "target": "spare", 00:13:51.160 "progress": { 00:13:51.160 "blocks": 20480, 00:13:51.160 "percent": 31 00:13:51.160 } 00:13:51.160 }, 00:13:51.160 "base_bdevs_list": [ 00:13:51.160 { 00:13:51.160 "name": "spare", 00:13:51.160 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:51.160 "is_configured": true, 00:13:51.160 "data_offset": 0, 00:13:51.161 "data_size": 65536 00:13:51.161 }, 00:13:51.161 { 00:13:51.161 "name": "BaseBdev2", 00:13:51.161 "uuid": "68fdceb5-7c07-5a85-b0fb-5b7ad05211bb", 00:13:51.161 "is_configured": true, 00:13:51.161 "data_offset": 0, 00:13:51.161 "data_size": 65536 00:13:51.161 }, 00:13:51.161 { 00:13:51.161 "name": "BaseBdev3", 00:13:51.161 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:51.161 "is_configured": true, 00:13:51.161 "data_offset": 0, 00:13:51.161 "data_size": 65536 00:13:51.161 }, 00:13:51.161 { 00:13:51.161 "name": "BaseBdev4", 00:13:51.161 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:51.161 "is_configured": true, 00:13:51.161 "data_offset": 0, 00:13:51.161 "data_size": 65536 00:13:51.161 } 00:13:51.161 ] 00:13:51.161 }' 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.161 [2024-11-18 04:02:47.692467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.161 [2024-11-18 04:02:47.733899] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:51.161 [2024-11-18 04:02:47.733969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.161 [2024-11-18 04:02:47.733984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:51.161 [2024-11-18 04:02:47.733993] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.161 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.421 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.421 "name": "raid_bdev1", 00:13:51.421 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:51.421 "strip_size_kb": 0, 00:13:51.421 "state": "online", 00:13:51.421 "raid_level": "raid1", 00:13:51.421 "superblock": false, 00:13:51.421 "num_base_bdevs": 4, 00:13:51.421 "num_base_bdevs_discovered": 3, 00:13:51.421 "num_base_bdevs_operational": 3, 00:13:51.421 "base_bdevs_list": [ 00:13:51.421 { 00:13:51.421 "name": null, 00:13:51.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.421 "is_configured": false, 00:13:51.421 "data_offset": 0, 00:13:51.422 "data_size": 65536 00:13:51.422 }, 00:13:51.422 { 00:13:51.422 "name": "BaseBdev2", 00:13:51.422 "uuid": "68fdceb5-7c07-5a85-b0fb-5b7ad05211bb", 00:13:51.422 "is_configured": true, 00:13:51.422 "data_offset": 0, 00:13:51.422 "data_size": 65536 00:13:51.422 }, 00:13:51.422 { 
00:13:51.422 "name": "BaseBdev3", 00:13:51.422 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:51.422 "is_configured": true, 00:13:51.422 "data_offset": 0, 00:13:51.422 "data_size": 65536 00:13:51.422 }, 00:13:51.422 { 00:13:51.422 "name": "BaseBdev4", 00:13:51.422 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:51.422 "is_configured": true, 00:13:51.422 "data_offset": 0, 00:13:51.422 "data_size": 65536 00:13:51.422 } 00:13:51.422 ] 00:13:51.422 }' 00:13:51.422 04:02:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.422 04:02:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.681 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.681 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.681 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.681 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.681 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.681 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.682 "name": "raid_bdev1", 00:13:51.682 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:51.682 "strip_size_kb": 0, 00:13:51.682 "state": "online", 
00:13:51.682 "raid_level": "raid1", 00:13:51.682 "superblock": false, 00:13:51.682 "num_base_bdevs": 4, 00:13:51.682 "num_base_bdevs_discovered": 3, 00:13:51.682 "num_base_bdevs_operational": 3, 00:13:51.682 "base_bdevs_list": [ 00:13:51.682 { 00:13:51.682 "name": null, 00:13:51.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.682 "is_configured": false, 00:13:51.682 "data_offset": 0, 00:13:51.682 "data_size": 65536 00:13:51.682 }, 00:13:51.682 { 00:13:51.682 "name": "BaseBdev2", 00:13:51.682 "uuid": "68fdceb5-7c07-5a85-b0fb-5b7ad05211bb", 00:13:51.682 "is_configured": true, 00:13:51.682 "data_offset": 0, 00:13:51.682 "data_size": 65536 00:13:51.682 }, 00:13:51.682 { 00:13:51.682 "name": "BaseBdev3", 00:13:51.682 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:51.682 "is_configured": true, 00:13:51.682 "data_offset": 0, 00:13:51.682 "data_size": 65536 00:13:51.682 }, 00:13:51.682 { 00:13:51.682 "name": "BaseBdev4", 00:13:51.682 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:51.682 "is_configured": true, 00:13:51.682 "data_offset": 0, 00:13:51.682 "data_size": 65536 00:13:51.682 } 00:13:51.682 ] 00:13:51.682 }' 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.682 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.943 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.943 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.943 04:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.943 04:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.943 [2024-11-18 04:02:48.372937] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.943 [2024-11-18 04:02:48.386830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:51.943 04:02:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.943 04:02:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:51.943 [2024-11-18 04:02:48.388650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.883 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.883 "name": "raid_bdev1", 00:13:52.883 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:52.883 "strip_size_kb": 0, 00:13:52.883 "state": "online", 00:13:52.883 "raid_level": "raid1", 00:13:52.883 "superblock": false, 00:13:52.883 "num_base_bdevs": 4, 00:13:52.883 
"num_base_bdevs_discovered": 4, 00:13:52.883 "num_base_bdevs_operational": 4, 00:13:52.883 "process": { 00:13:52.883 "type": "rebuild", 00:13:52.883 "target": "spare", 00:13:52.883 "progress": { 00:13:52.883 "blocks": 20480, 00:13:52.883 "percent": 31 00:13:52.883 } 00:13:52.883 }, 00:13:52.883 "base_bdevs_list": [ 00:13:52.883 { 00:13:52.883 "name": "spare", 00:13:52.883 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 0, 00:13:52.883 "data_size": 65536 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev2", 00:13:52.883 "uuid": "68fdceb5-7c07-5a85-b0fb-5b7ad05211bb", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 0, 00:13:52.883 "data_size": 65536 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev3", 00:13:52.883 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 0, 00:13:52.883 "data_size": 65536 00:13:52.883 }, 00:13:52.883 { 00:13:52.883 "name": "BaseBdev4", 00:13:52.883 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:52.883 "is_configured": true, 00:13:52.883 "data_offset": 0, 00:13:52.883 "data_size": 65536 00:13:52.883 } 00:13:52.883 ] 00:13:52.884 }' 00:13:52.884 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.884 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.884 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.142 [2024-11-18 04:02:49.544628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.142 [2024-11-18 04:02:49.593022] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.142 04:02:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.142 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.142 "name": "raid_bdev1", 00:13:53.142 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:53.142 "strip_size_kb": 0, 00:13:53.142 "state": "online", 00:13:53.142 "raid_level": "raid1", 00:13:53.142 "superblock": false, 00:13:53.142 "num_base_bdevs": 4, 00:13:53.142 "num_base_bdevs_discovered": 3, 00:13:53.142 "num_base_bdevs_operational": 3, 00:13:53.142 "process": { 00:13:53.142 "type": "rebuild", 00:13:53.142 "target": "spare", 00:13:53.142 "progress": { 00:13:53.142 "blocks": 24576, 00:13:53.142 "percent": 37 00:13:53.143 } 00:13:53.143 }, 00:13:53.143 "base_bdevs_list": [ 00:13:53.143 { 00:13:53.143 "name": "spare", 00:13:53.143 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:53.143 "is_configured": true, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 }, 00:13:53.143 { 00:13:53.143 "name": null, 00:13:53.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.143 "is_configured": false, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 }, 00:13:53.143 { 00:13:53.143 "name": "BaseBdev3", 00:13:53.143 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:53.143 "is_configured": true, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 }, 00:13:53.143 { 00:13:53.143 "name": "BaseBdev4", 00:13:53.143 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:53.143 "is_configured": true, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 } 00:13:53.143 ] 00:13:53.143 }' 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=443 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.143 "name": "raid_bdev1", 00:13:53.143 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:53.143 "strip_size_kb": 0, 00:13:53.143 "state": "online", 00:13:53.143 "raid_level": "raid1", 00:13:53.143 "superblock": false, 00:13:53.143 "num_base_bdevs": 4, 00:13:53.143 "num_base_bdevs_discovered": 3, 00:13:53.143 "num_base_bdevs_operational": 3, 00:13:53.143 "process": { 00:13:53.143 "type": "rebuild", 00:13:53.143 "target": "spare", 00:13:53.143 "progress": { 
00:13:53.143 "blocks": 26624, 00:13:53.143 "percent": 40 00:13:53.143 } 00:13:53.143 }, 00:13:53.143 "base_bdevs_list": [ 00:13:53.143 { 00:13:53.143 "name": "spare", 00:13:53.143 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:53.143 "is_configured": true, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 }, 00:13:53.143 { 00:13:53.143 "name": null, 00:13:53.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.143 "is_configured": false, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 }, 00:13:53.143 { 00:13:53.143 "name": "BaseBdev3", 00:13:53.143 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:53.143 "is_configured": true, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 }, 00:13:53.143 { 00:13:53.143 "name": "BaseBdev4", 00:13:53.143 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:53.143 "is_configured": true, 00:13:53.143 "data_offset": 0, 00:13:53.143 "data_size": 65536 00:13:53.143 } 00:13:53.143 ] 00:13:53.143 }' 00:13:53.143 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.402 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.402 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.402 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.402 04:02:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.343 "name": "raid_bdev1", 00:13:54.343 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:54.343 "strip_size_kb": 0, 00:13:54.343 "state": "online", 00:13:54.343 "raid_level": "raid1", 00:13:54.343 "superblock": false, 00:13:54.343 "num_base_bdevs": 4, 00:13:54.343 "num_base_bdevs_discovered": 3, 00:13:54.343 "num_base_bdevs_operational": 3, 00:13:54.343 "process": { 00:13:54.343 "type": "rebuild", 00:13:54.343 "target": "spare", 00:13:54.343 "progress": { 00:13:54.343 "blocks": 49152, 00:13:54.343 "percent": 75 00:13:54.343 } 00:13:54.343 }, 00:13:54.343 "base_bdevs_list": [ 00:13:54.343 { 00:13:54.343 "name": "spare", 00:13:54.343 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:54.343 "is_configured": true, 00:13:54.343 "data_offset": 0, 00:13:54.343 "data_size": 65536 00:13:54.343 }, 00:13:54.343 { 00:13:54.343 "name": null, 00:13:54.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.343 "is_configured": false, 00:13:54.343 "data_offset": 0, 00:13:54.343 "data_size": 65536 00:13:54.343 }, 00:13:54.343 { 00:13:54.343 "name": "BaseBdev3", 00:13:54.343 "uuid": 
"b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:54.343 "is_configured": true, 00:13:54.343 "data_offset": 0, 00:13:54.343 "data_size": 65536 00:13:54.343 }, 00:13:54.343 { 00:13:54.343 "name": "BaseBdev4", 00:13:54.343 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:54.343 "is_configured": true, 00:13:54.343 "data_offset": 0, 00:13:54.343 "data_size": 65536 00:13:54.343 } 00:13:54.343 ] 00:13:54.343 }' 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.343 04:02:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.604 04:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.604 04:02:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.175 [2024-11-18 04:02:51.600231] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.175 [2024-11-18 04:02:51.600295] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.175 [2024-11-18 04:02:51.600336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.435 04:02:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.435 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.696 "name": "raid_bdev1", 00:13:55.696 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:55.696 "strip_size_kb": 0, 00:13:55.696 "state": "online", 00:13:55.696 "raid_level": "raid1", 00:13:55.696 "superblock": false, 00:13:55.696 "num_base_bdevs": 4, 00:13:55.696 "num_base_bdevs_discovered": 3, 00:13:55.696 "num_base_bdevs_operational": 3, 00:13:55.696 "base_bdevs_list": [ 00:13:55.696 { 00:13:55.696 "name": "spare", 00:13:55.696 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:55.696 "is_configured": true, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 }, 00:13:55.696 { 00:13:55.696 "name": null, 00:13:55.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.696 "is_configured": false, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 }, 00:13:55.696 { 00:13:55.696 "name": "BaseBdev3", 00:13:55.696 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:55.696 "is_configured": true, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 }, 00:13:55.696 { 00:13:55.696 "name": "BaseBdev4", 00:13:55.696 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:55.696 "is_configured": true, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 } 00:13:55.696 ] 00:13:55.696 }' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.696 "name": "raid_bdev1", 00:13:55.696 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:55.696 "strip_size_kb": 0, 00:13:55.696 "state": "online", 00:13:55.696 "raid_level": "raid1", 00:13:55.696 "superblock": false, 00:13:55.696 "num_base_bdevs": 4, 00:13:55.696 "num_base_bdevs_discovered": 3, 00:13:55.696 "num_base_bdevs_operational": 3, 00:13:55.696 
"base_bdevs_list": [ 00:13:55.696 { 00:13:55.696 "name": "spare", 00:13:55.696 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:55.696 "is_configured": true, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 }, 00:13:55.696 { 00:13:55.696 "name": null, 00:13:55.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.696 "is_configured": false, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 }, 00:13:55.696 { 00:13:55.696 "name": "BaseBdev3", 00:13:55.696 "uuid": "b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:55.696 "is_configured": true, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 }, 00:13:55.696 { 00:13:55.696 "name": "BaseBdev4", 00:13:55.696 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:55.696 "is_configured": true, 00:13:55.696 "data_offset": 0, 00:13:55.696 "data_size": 65536 00:13:55.696 } 00:13:55.696 ] 00:13:55.696 }' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.696 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.696 "name": "raid_bdev1", 00:13:55.696 "uuid": "7f6bdcd8-613a-4910-ad23-409f2b40e26b", 00:13:55.696 "strip_size_kb": 0, 00:13:55.696 "state": "online", 00:13:55.697 "raid_level": "raid1", 00:13:55.697 "superblock": false, 00:13:55.697 "num_base_bdevs": 4, 00:13:55.697 "num_base_bdevs_discovered": 3, 00:13:55.697 "num_base_bdevs_operational": 3, 00:13:55.697 "base_bdevs_list": [ 00:13:55.697 { 00:13:55.697 "name": "spare", 00:13:55.697 "uuid": "298c8f75-efdc-5ae8-bbd9-f2d78cfe1ea2", 00:13:55.697 "is_configured": true, 00:13:55.697 "data_offset": 0, 00:13:55.697 "data_size": 65536 00:13:55.697 }, 00:13:55.697 { 00:13:55.697 "name": null, 00:13:55.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.697 "is_configured": false, 00:13:55.697 "data_offset": 0, 00:13:55.697 "data_size": 65536 00:13:55.697 }, 00:13:55.697 { 00:13:55.697 "name": "BaseBdev3", 00:13:55.697 "uuid": 
"b0a59afe-20b0-5a4c-96bb-337d124a1e6c", 00:13:55.697 "is_configured": true, 00:13:55.697 "data_offset": 0, 00:13:55.697 "data_size": 65536 00:13:55.697 }, 00:13:55.697 { 00:13:55.697 "name": "BaseBdev4", 00:13:55.697 "uuid": "1936ddfd-10b3-575c-abb7-1e147c6cba10", 00:13:55.697 "is_configured": true, 00:13:55.697 "data_offset": 0, 00:13:55.697 "data_size": 65536 00:13:55.697 } 00:13:55.697 ] 00:13:55.697 }' 00:13:55.697 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.697 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.268 [2024-11-18 04:02:52.734724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.268 [2024-11-18 04:02:52.734753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.268 [2024-11-18 04:02:52.734845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.268 [2024-11-18 04:02:52.734921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.268 [2024-11-18 04:02:52.734930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.268 04:02:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:56.528 /dev/nbd0 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:56.528 04:02:53 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:56.528 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.529 1+0 records in 00:13:56.529 1+0 records out 00:13:56.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317033 s, 12.9 MB/s 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.529 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:56.789 /dev/nbd1 00:13:56.789 
04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.789 1+0 records in 00:13:56.789 1+0 records out 00:13:56.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332088 s, 12.3 MB/s 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.789 04:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:57.049 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.050 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77495 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77495 ']' 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77495 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77495 00:13:57.310 killing process with pid 77495 00:13:57.310 Received shutdown signal, test time was about 60.000000 seconds 00:13:57.310 00:13:57.310 Latency(us) 00:13:57.310 [2024-11-18T04:02:53.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.310 [2024-11-18T04:02:53.951Z] 
=================================================================================================================== 00:13:57.310 [2024-11-18T04:02:53.951Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77495' 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77495 00:13:57.310 [2024-11-18 04:02:53.934052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.310 04:02:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77495 00:13:57.881 [2024-11-18 04:02:54.383312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.821 04:02:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:58.821 00:13:58.821 real 0m16.470s 00:13:58.821 user 0m18.533s 00:13:58.821 sys 0m2.827s 00:13:58.821 04:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.821 04:02:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.821 ************************************ 00:13:58.821 END TEST raid_rebuild_test 00:13:58.821 ************************************ 00:13:59.082 04:02:55 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:59.082 04:02:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:59.082 04:02:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.082 04:02:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.082 ************************************ 00:13:59.082 START TEST raid_rebuild_test_sb 00:13:59.082 
************************************ 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77930 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77930 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77930 ']' 00:13:59.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.082 04:02:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.082 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:59.082 Zero copy mechanism will not be used. 00:13:59.082 [2024-11-18 04:02:55.578177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:59.082 [2024-11-18 04:02:55.578301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77930 ] 00:13:59.343 [2024-11-18 04:02:55.751046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.343 [2024-11-18 04:02:55.856575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.603 [2024-11-18 04:02:56.047162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.603 [2024-11-18 04:02:56.047193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.863 BaseBdev1_malloc 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.863 [2024-11-18 04:02:56.439324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:59.863 [2024-11-18 04:02:56.439405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.863 [2024-11-18 04:02:56.439430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:59.863 [2024-11-18 04:02:56.439440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.863 [2024-11-18 04:02:56.441518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.863 [2024-11-18 04:02:56.441557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:59.863 BaseBdev1 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.863 BaseBdev2_malloc 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.863 [2024-11-18 04:02:56.493371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.863 [2024-11-18 04:02:56.493427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.863 [2024-11-18 04:02:56.493446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:59.863 [2024-11-18 04:02:56.493457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.863 [2024-11-18 04:02:56.495475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.863 [2024-11-18 04:02:56.495514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.863 BaseBdev2 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.863 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 BaseBdev3_malloc 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 [2024-11-18 04:02:56.579344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:00.124 [2024-11-18 04:02:56.579395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.124 [2024-11-18 04:02:56.579417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:00.124 [2024-11-18 04:02:56.579427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.124 [2024-11-18 04:02:56.581419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.124 [2024-11-18 04:02:56.581458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:00.124 BaseBdev3 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 BaseBdev4_malloc 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 [2024-11-18 04:02:56.632553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:00.124 [2024-11-18 04:02:56.632601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.124 [2024-11-18 04:02:56.632634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:00.124 [2024-11-18 04:02:56.632644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.124 [2024-11-18 04:02:56.634579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.124 [2024-11-18 04:02:56.634658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:00.124 BaseBdev4 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 spare_malloc 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 spare_delay 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 [2024-11-18 04:02:56.699554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:00.124 [2024-11-18 04:02:56.699609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.124 [2024-11-18 04:02:56.699628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:00.124 [2024-11-18 04:02:56.699638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.124 [2024-11-18 04:02:56.701673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.124 [2024-11-18 04:02:56.701710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:00.124 spare 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 [2024-11-18 04:02:56.711585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.124 [2024-11-18 04:02:56.713369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.124 [2024-11-18 04:02:56.713432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:00.124 [2024-11-18 04:02:56.713480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.124 [2024-11-18 04:02:56.713657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:00.124 [2024-11-18 04:02:56.713672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:00.124 [2024-11-18 04:02:56.713910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:00.124 [2024-11-18 04:02:56.714087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:00.124 [2024-11-18 04:02:56.714097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:00.124 [2024-11-18 04:02:56.714235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.124 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.384 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.384 "name": "raid_bdev1", 00:14:00.384 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:00.384 "strip_size_kb": 0, 00:14:00.384 "state": "online", 00:14:00.384 "raid_level": "raid1", 00:14:00.384 "superblock": true, 00:14:00.384 "num_base_bdevs": 4, 00:14:00.384 "num_base_bdevs_discovered": 4, 00:14:00.384 "num_base_bdevs_operational": 4, 00:14:00.384 "base_bdevs_list": [ 00:14:00.384 { 00:14:00.384 "name": "BaseBdev1", 00:14:00.384 "uuid": "b95f8daa-b10c-5c03-88e2-9766331cb85c", 00:14:00.384 "is_configured": true, 00:14:00.384 "data_offset": 2048, 00:14:00.384 "data_size": 63488 00:14:00.384 }, 00:14:00.384 { 00:14:00.384 "name": "BaseBdev2", 00:14:00.384 "uuid": "105725a2-ac87-5a87-909c-b6f39b550df0", 00:14:00.384 "is_configured": true, 00:14:00.384 "data_offset": 2048, 00:14:00.384 "data_size": 63488 00:14:00.384 }, 00:14:00.384 { 00:14:00.384 "name": "BaseBdev3", 00:14:00.384 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:00.384 "is_configured": true, 00:14:00.384 "data_offset": 2048, 00:14:00.384 "data_size": 63488 00:14:00.384 }, 00:14:00.384 { 00:14:00.384 "name": "BaseBdev4", 00:14:00.384 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:00.384 "is_configured": true, 
00:14:00.384 "data_offset": 2048, 00:14:00.384 "data_size": 63488 00:14:00.384 } 00:14:00.384 ] 00:14:00.384 }' 00:14:00.384 04:02:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.384 04:02:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.644 [2024-11-18 04:02:57.175096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 
-- # local write_unit_size 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.644 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:00.904 [2024-11-18 04:02:57.434374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:00.904 /dev/nbd0 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.904 1+0 records in 00:14:00.904 1+0 records out 00:14:00.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529276 s, 7.7 MB/s 00:14:00.904 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:00.905 04:02:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:06.212 63488+0 records in 00:14:06.212 63488+0 records out 00:14:06.212 32505856 bytes (33 MB, 31 MiB) copied, 4.65097 s, 7.0 MB/s 00:14:06.212 04:03:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.212 [2024-11-18 04:03:02.375676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.212 [2024-11-18 04:03:02.387875] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.212 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:06.213 "name": "raid_bdev1", 00:14:06.213 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:06.213 "strip_size_kb": 0, 00:14:06.213 "state": "online", 00:14:06.213 "raid_level": "raid1", 00:14:06.213 "superblock": true, 00:14:06.213 "num_base_bdevs": 4, 00:14:06.213 "num_base_bdevs_discovered": 3, 00:14:06.213 "num_base_bdevs_operational": 3, 00:14:06.213 "base_bdevs_list": [ 00:14:06.213 { 00:14:06.213 "name": null, 00:14:06.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.213 "is_configured": false, 00:14:06.213 "data_offset": 0, 00:14:06.213 "data_size": 63488 00:14:06.213 }, 00:14:06.213 { 00:14:06.213 "name": "BaseBdev2", 00:14:06.213 "uuid": "105725a2-ac87-5a87-909c-b6f39b550df0", 00:14:06.213 "is_configured": true, 00:14:06.213 "data_offset": 2048, 00:14:06.213 "data_size": 63488 00:14:06.213 }, 00:14:06.213 { 00:14:06.213 "name": "BaseBdev3", 00:14:06.213 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:06.213 "is_configured": true, 00:14:06.213 "data_offset": 2048, 00:14:06.213 "data_size": 63488 00:14:06.213 }, 00:14:06.213 { 00:14:06.213 "name": "BaseBdev4", 00:14:06.213 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:06.213 "is_configured": true, 00:14:06.213 "data_offset": 2048, 00:14:06.213 "data_size": 63488 00:14:06.213 } 00:14:06.213 ] 00:14:06.213 }' 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.213 [2024-11-18 04:03:02.831133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:14:06.213 [2024-11-18 04:03:02.846205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.213 04:03:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:06.213 [2024-11-18 04:03:02.848047] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.596 "name": "raid_bdev1", 00:14:07.596 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:07.596 "strip_size_kb": 0, 00:14:07.596 "state": "online", 00:14:07.596 "raid_level": "raid1", 00:14:07.596 "superblock": true, 00:14:07.596 "num_base_bdevs": 4, 00:14:07.596 "num_base_bdevs_discovered": 4, 00:14:07.596 
"num_base_bdevs_operational": 4, 00:14:07.596 "process": { 00:14:07.596 "type": "rebuild", 00:14:07.596 "target": "spare", 00:14:07.596 "progress": { 00:14:07.596 "blocks": 20480, 00:14:07.596 "percent": 32 00:14:07.596 } 00:14:07.596 }, 00:14:07.596 "base_bdevs_list": [ 00:14:07.596 { 00:14:07.596 "name": "spare", 00:14:07.596 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:07.596 "is_configured": true, 00:14:07.596 "data_offset": 2048, 00:14:07.596 "data_size": 63488 00:14:07.596 }, 00:14:07.596 { 00:14:07.596 "name": "BaseBdev2", 00:14:07.596 "uuid": "105725a2-ac87-5a87-909c-b6f39b550df0", 00:14:07.596 "is_configured": true, 00:14:07.596 "data_offset": 2048, 00:14:07.596 "data_size": 63488 00:14:07.596 }, 00:14:07.596 { 00:14:07.596 "name": "BaseBdev3", 00:14:07.596 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:07.596 "is_configured": true, 00:14:07.596 "data_offset": 2048, 00:14:07.596 "data_size": 63488 00:14:07.596 }, 00:14:07.596 { 00:14:07.596 "name": "BaseBdev4", 00:14:07.596 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:07.596 "is_configured": true, 00:14:07.596 "data_offset": 2048, 00:14:07.596 "data_size": 63488 00:14:07.596 } 00:14:07.596 ] 00:14:07.596 }' 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:07.596 04:03:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:07.596 [2024-11-18 04:03:04.008062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.596 [2024-11-18 04:03:04.052456] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:07.596 [2024-11-18 04:03:04.052512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.596 [2024-11-18 04:03:04.052526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.596 [2024-11-18 04:03:04.052536] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.596 04:03:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.596 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.596 "name": "raid_bdev1", 00:14:07.596 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:07.596 "strip_size_kb": 0, 00:14:07.596 "state": "online", 00:14:07.596 "raid_level": "raid1", 00:14:07.596 "superblock": true, 00:14:07.596 "num_base_bdevs": 4, 00:14:07.596 "num_base_bdevs_discovered": 3, 00:14:07.596 "num_base_bdevs_operational": 3, 00:14:07.596 "base_bdevs_list": [ 00:14:07.596 { 00:14:07.596 "name": null, 00:14:07.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.596 "is_configured": false, 00:14:07.596 "data_offset": 0, 00:14:07.596 "data_size": 63488 00:14:07.597 }, 00:14:07.597 { 00:14:07.597 "name": "BaseBdev2", 00:14:07.597 "uuid": "105725a2-ac87-5a87-909c-b6f39b550df0", 00:14:07.597 "is_configured": true, 00:14:07.597 "data_offset": 2048, 00:14:07.597 "data_size": 63488 00:14:07.597 }, 00:14:07.597 { 00:14:07.597 "name": "BaseBdev3", 00:14:07.597 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:07.597 "is_configured": true, 00:14:07.597 "data_offset": 2048, 00:14:07.597 "data_size": 63488 00:14:07.597 }, 00:14:07.597 { 00:14:07.597 "name": "BaseBdev4", 00:14:07.597 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:07.597 "is_configured": true, 00:14:07.597 "data_offset": 2048, 00:14:07.597 "data_size": 63488 00:14:07.597 } 00:14:07.597 ] 00:14:07.597 }' 00:14:07.597 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.597 04:03:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.857 "name": "raid_bdev1", 00:14:07.857 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:07.857 "strip_size_kb": 0, 00:14:07.857 "state": "online", 00:14:07.857 "raid_level": "raid1", 00:14:07.857 "superblock": true, 00:14:07.857 "num_base_bdevs": 4, 00:14:07.857 "num_base_bdevs_discovered": 3, 00:14:07.857 "num_base_bdevs_operational": 3, 00:14:07.857 "base_bdevs_list": [ 00:14:07.857 { 00:14:07.857 "name": null, 00:14:07.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.857 "is_configured": false, 00:14:07.857 "data_offset": 0, 00:14:07.857 "data_size": 63488 00:14:07.857 }, 00:14:07.857 { 00:14:07.857 "name": "BaseBdev2", 00:14:07.857 "uuid": "105725a2-ac87-5a87-909c-b6f39b550df0", 00:14:07.857 "is_configured": 
true, 00:14:07.857 "data_offset": 2048, 00:14:07.857 "data_size": 63488 00:14:07.857 }, 00:14:07.857 { 00:14:07.857 "name": "BaseBdev3", 00:14:07.857 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:07.857 "is_configured": true, 00:14:07.857 "data_offset": 2048, 00:14:07.857 "data_size": 63488 00:14:07.857 }, 00:14:07.857 { 00:14:07.857 "name": "BaseBdev4", 00:14:07.857 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:07.857 "is_configured": true, 00:14:07.857 "data_offset": 2048, 00:14:07.857 "data_size": 63488 00:14:07.857 } 00:14:07.857 ] 00:14:07.857 }' 00:14:07.857 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.117 [2024-11-18 04:03:04.544247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.117 [2024-11-18 04:03:04.558457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.117 04:03:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:08.117 [2024-11-18 04:03:04.560218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.057 "name": "raid_bdev1", 00:14:09.057 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:09.057 "strip_size_kb": 0, 00:14:09.057 "state": "online", 00:14:09.057 "raid_level": "raid1", 00:14:09.057 "superblock": true, 00:14:09.057 "num_base_bdevs": 4, 00:14:09.057 "num_base_bdevs_discovered": 4, 00:14:09.057 "num_base_bdevs_operational": 4, 00:14:09.057 "process": { 00:14:09.057 "type": "rebuild", 00:14:09.057 "target": "spare", 00:14:09.057 "progress": { 00:14:09.057 "blocks": 20480, 00:14:09.057 "percent": 32 00:14:09.057 } 00:14:09.057 }, 00:14:09.057 "base_bdevs_list": [ 00:14:09.057 { 00:14:09.057 "name": "spare", 00:14:09.057 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:09.057 "is_configured": true, 00:14:09.057 "data_offset": 2048, 00:14:09.057 "data_size": 63488 00:14:09.057 }, 00:14:09.057 { 
00:14:09.057 "name": "BaseBdev2", 00:14:09.057 "uuid": "105725a2-ac87-5a87-909c-b6f39b550df0", 00:14:09.057 "is_configured": true, 00:14:09.057 "data_offset": 2048, 00:14:09.057 "data_size": 63488 00:14:09.057 }, 00:14:09.057 { 00:14:09.057 "name": "BaseBdev3", 00:14:09.057 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:09.057 "is_configured": true, 00:14:09.057 "data_offset": 2048, 00:14:09.057 "data_size": 63488 00:14:09.057 }, 00:14:09.057 { 00:14:09.057 "name": "BaseBdev4", 00:14:09.057 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:09.057 "is_configured": true, 00:14:09.057 "data_offset": 2048, 00:14:09.057 "data_size": 63488 00:14:09.057 } 00:14:09.057 ] 00:14:09.057 }' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:09.057 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:09.057 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 [2024-11-18 04:03:05.699840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.317 [2024-11-18 04:03:05.864422] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.317 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.318 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.318 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.318 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.318 04:03:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.318 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.318 "name": "raid_bdev1", 00:14:09.318 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 
00:14:09.318 "strip_size_kb": 0, 00:14:09.318 "state": "online", 00:14:09.318 "raid_level": "raid1", 00:14:09.318 "superblock": true, 00:14:09.318 "num_base_bdevs": 4, 00:14:09.318 "num_base_bdevs_discovered": 3, 00:14:09.318 "num_base_bdevs_operational": 3, 00:14:09.318 "process": { 00:14:09.318 "type": "rebuild", 00:14:09.318 "target": "spare", 00:14:09.318 "progress": { 00:14:09.318 "blocks": 24576, 00:14:09.318 "percent": 38 00:14:09.318 } 00:14:09.318 }, 00:14:09.318 "base_bdevs_list": [ 00:14:09.318 { 00:14:09.318 "name": "spare", 00:14:09.318 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:09.318 "is_configured": true, 00:14:09.318 "data_offset": 2048, 00:14:09.318 "data_size": 63488 00:14:09.318 }, 00:14:09.318 { 00:14:09.318 "name": null, 00:14:09.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.318 "is_configured": false, 00:14:09.318 "data_offset": 0, 00:14:09.318 "data_size": 63488 00:14:09.318 }, 00:14:09.318 { 00:14:09.318 "name": "BaseBdev3", 00:14:09.318 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:09.318 "is_configured": true, 00:14:09.318 "data_offset": 2048, 00:14:09.318 "data_size": 63488 00:14:09.318 }, 00:14:09.318 { 00:14:09.318 "name": "BaseBdev4", 00:14:09.318 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:09.318 "is_configured": true, 00:14:09.318 "data_offset": 2048, 00:14:09.318 "data_size": 63488 00:14:09.318 } 00:14:09.318 ] 00:14:09.318 }' 00:14:09.318 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.578 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.578 04:03:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=460 00:14:09.578 
04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.578 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.578 "name": "raid_bdev1", 00:14:09.578 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:09.578 "strip_size_kb": 0, 00:14:09.578 "state": "online", 00:14:09.578 "raid_level": "raid1", 00:14:09.578 "superblock": true, 00:14:09.578 "num_base_bdevs": 4, 00:14:09.578 "num_base_bdevs_discovered": 3, 00:14:09.578 "num_base_bdevs_operational": 3, 00:14:09.578 "process": { 00:14:09.578 "type": "rebuild", 00:14:09.578 "target": "spare", 00:14:09.578 "progress": { 00:14:09.578 "blocks": 26624, 00:14:09.578 "percent": 41 00:14:09.578 } 00:14:09.578 }, 00:14:09.578 "base_bdevs_list": [ 00:14:09.578 { 00:14:09.578 "name": "spare", 00:14:09.578 "uuid": 
"995c04ef-5328-58fb-949b-d845536c407f", 00:14:09.578 "is_configured": true, 00:14:09.578 "data_offset": 2048, 00:14:09.578 "data_size": 63488 00:14:09.578 }, 00:14:09.578 { 00:14:09.578 "name": null, 00:14:09.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.578 "is_configured": false, 00:14:09.578 "data_offset": 0, 00:14:09.578 "data_size": 63488 00:14:09.578 }, 00:14:09.578 { 00:14:09.578 "name": "BaseBdev3", 00:14:09.578 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:09.578 "is_configured": true, 00:14:09.578 "data_offset": 2048, 00:14:09.578 "data_size": 63488 00:14:09.578 }, 00:14:09.579 { 00:14:09.579 "name": "BaseBdev4", 00:14:09.579 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:09.579 "is_configured": true, 00:14:09.579 "data_offset": 2048, 00:14:09.579 "data_size": 63488 00:14:09.579 } 00:14:09.579 ] 00:14:09.579 }' 00:14:09.579 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.579 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.579 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.579 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.579 04:03:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.519 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.519 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.519 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.779 04:03:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.779 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.779 "name": "raid_bdev1", 00:14:10.779 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:10.779 "strip_size_kb": 0, 00:14:10.779 "state": "online", 00:14:10.779 "raid_level": "raid1", 00:14:10.779 "superblock": true, 00:14:10.779 "num_base_bdevs": 4, 00:14:10.779 "num_base_bdevs_discovered": 3, 00:14:10.779 "num_base_bdevs_operational": 3, 00:14:10.779 "process": { 00:14:10.779 "type": "rebuild", 00:14:10.779 "target": "spare", 00:14:10.779 "progress": { 00:14:10.779 "blocks": 51200, 00:14:10.780 "percent": 80 00:14:10.780 } 00:14:10.780 }, 00:14:10.780 "base_bdevs_list": [ 00:14:10.780 { 00:14:10.780 "name": "spare", 00:14:10.780 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:10.780 "is_configured": true, 00:14:10.780 "data_offset": 2048, 00:14:10.780 "data_size": 63488 00:14:10.780 }, 00:14:10.780 { 00:14:10.780 "name": null, 00:14:10.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.780 "is_configured": false, 00:14:10.780 "data_offset": 0, 00:14:10.780 "data_size": 63488 00:14:10.780 }, 00:14:10.780 { 00:14:10.780 "name": "BaseBdev3", 00:14:10.780 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:10.780 "is_configured": true, 00:14:10.780 "data_offset": 2048, 00:14:10.780 "data_size": 63488 00:14:10.780 }, 
00:14:10.780 { 00:14:10.780 "name": "BaseBdev4", 00:14:10.780 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:10.780 "is_configured": true, 00:14:10.780 "data_offset": 2048, 00:14:10.780 "data_size": 63488 00:14:10.780 } 00:14:10.780 ] 00:14:10.780 }' 00:14:10.780 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.780 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.780 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.780 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.780 04:03:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.350 [2024-11-18 04:03:07.771457] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:11.350 [2024-11-18 04:03:07.771522] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:11.350 [2024-11-18 04:03:07.771616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.919 04:03:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.919 "name": "raid_bdev1", 00:14:11.919 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:11.919 "strip_size_kb": 0, 00:14:11.919 "state": "online", 00:14:11.919 "raid_level": "raid1", 00:14:11.919 "superblock": true, 00:14:11.919 "num_base_bdevs": 4, 00:14:11.919 "num_base_bdevs_discovered": 3, 00:14:11.919 "num_base_bdevs_operational": 3, 00:14:11.919 "base_bdevs_list": [ 00:14:11.919 { 00:14:11.919 "name": "spare", 00:14:11.919 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:11.919 "is_configured": true, 00:14:11.919 "data_offset": 2048, 00:14:11.919 "data_size": 63488 00:14:11.919 }, 00:14:11.919 { 00:14:11.919 "name": null, 00:14:11.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.919 "is_configured": false, 00:14:11.919 "data_offset": 0, 00:14:11.919 "data_size": 63488 00:14:11.919 }, 00:14:11.919 { 00:14:11.919 "name": "BaseBdev3", 00:14:11.919 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:11.919 "is_configured": true, 00:14:11.919 "data_offset": 2048, 00:14:11.919 "data_size": 63488 00:14:11.919 }, 00:14:11.919 { 00:14:11.919 "name": "BaseBdev4", 00:14:11.919 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:11.919 "is_configured": true, 00:14:11.919 "data_offset": 2048, 00:14:11.919 "data_size": 63488 00:14:11.919 } 00:14:11.919 ] 00:14:11.919 }' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.919 "name": "raid_bdev1", 00:14:11.919 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:11.919 "strip_size_kb": 0, 00:14:11.919 "state": "online", 00:14:11.919 "raid_level": "raid1", 00:14:11.919 "superblock": true, 00:14:11.919 "num_base_bdevs": 4, 00:14:11.919 "num_base_bdevs_discovered": 3, 00:14:11.919 "num_base_bdevs_operational": 3, 00:14:11.919 "base_bdevs_list": [ 00:14:11.919 { 00:14:11.919 
"name": "spare", 00:14:11.919 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:11.919 "is_configured": true, 00:14:11.919 "data_offset": 2048, 00:14:11.919 "data_size": 63488 00:14:11.919 }, 00:14:11.919 { 00:14:11.919 "name": null, 00:14:11.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.919 "is_configured": false, 00:14:11.919 "data_offset": 0, 00:14:11.919 "data_size": 63488 00:14:11.919 }, 00:14:11.919 { 00:14:11.919 "name": "BaseBdev3", 00:14:11.919 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:11.919 "is_configured": true, 00:14:11.919 "data_offset": 2048, 00:14:11.919 "data_size": 63488 00:14:11.919 }, 00:14:11.919 { 00:14:11.919 "name": "BaseBdev4", 00:14:11.919 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:11.919 "is_configured": true, 00:14:11.919 "data_offset": 2048, 00:14:11.919 "data_size": 63488 00:14:11.919 } 00:14:11.919 ] 00:14:11.919 }' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.919 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.179 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.179 "name": "raid_bdev1", 00:14:12.179 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:12.179 "strip_size_kb": 0, 00:14:12.179 "state": "online", 00:14:12.179 "raid_level": "raid1", 00:14:12.179 "superblock": true, 00:14:12.179 "num_base_bdevs": 4, 00:14:12.179 "num_base_bdevs_discovered": 3, 00:14:12.179 "num_base_bdevs_operational": 3, 00:14:12.179 "base_bdevs_list": [ 00:14:12.179 { 00:14:12.179 "name": "spare", 00:14:12.179 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:12.179 "is_configured": true, 00:14:12.179 "data_offset": 2048, 00:14:12.179 "data_size": 63488 00:14:12.179 }, 00:14:12.179 { 00:14:12.179 "name": null, 00:14:12.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.179 "is_configured": false, 00:14:12.179 "data_offset": 0, 00:14:12.179 "data_size": 63488 00:14:12.179 }, 00:14:12.179 { 00:14:12.179 "name": "BaseBdev3", 00:14:12.179 
"uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:12.179 "is_configured": true, 00:14:12.179 "data_offset": 2048, 00:14:12.179 "data_size": 63488 00:14:12.179 }, 00:14:12.179 { 00:14:12.179 "name": "BaseBdev4", 00:14:12.179 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:12.179 "is_configured": true, 00:14:12.179 "data_offset": 2048, 00:14:12.179 "data_size": 63488 00:14:12.179 } 00:14:12.179 ] 00:14:12.179 }' 00:14:12.179 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.179 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.440 [2024-11-18 04:03:08.965140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.440 [2024-11-18 04:03:08.965171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.440 [2024-11-18 04:03:08.965253] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.440 [2024-11-18 04:03:08.965331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.440 [2024-11-18 04:03:08.965345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.440 04:03:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.440 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:12.700 /dev/nbd0 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.700 04:03:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.700 1+0 records in 00:14:12.700 1+0 records out 00:14:12.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386687 s, 10.6 MB/s 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.700 04:03:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:12.960 /dev/nbd1 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.960 1+0 records in 00:14:12.960 1+0 records out 00:14:12.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420084 s, 9.8 MB/s 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:12.960 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.220 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:13.479 
04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:13.479 04:03:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.742 04:03:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.742 [2024-11-18 04:03:10.153663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.742 [2024-11-18 04:03:10.153715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.742 [2024-11-18 04:03:10.153738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:13.742 [2024-11-18 04:03:10.153746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.742 [2024-11-18 04:03:10.155873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.742 [2024-11-18 04:03:10.155951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.743 [2024-11-18 04:03:10.156072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:13.743 [2024-11-18 04:03:10.156151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.743 [2024-11-18 04:03:10.156321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.743 [2024-11-18 04:03:10.156461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.743 spare 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.743 [2024-11-18 04:03:10.256386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:13.743 [2024-11-18 04:03:10.256442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:13.743 [2024-11-18 
04:03:10.256750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:13.743 [2024-11-18 04:03:10.256972] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:13.743 [2024-11-18 04:03:10.257019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:13.743 [2024-11-18 04:03:10.257212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.743 "name": "raid_bdev1", 00:14:13.743 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:13.743 "strip_size_kb": 0, 00:14:13.743 "state": "online", 00:14:13.743 "raid_level": "raid1", 00:14:13.743 "superblock": true, 00:14:13.743 "num_base_bdevs": 4, 00:14:13.743 "num_base_bdevs_discovered": 3, 00:14:13.743 "num_base_bdevs_operational": 3, 00:14:13.743 "base_bdevs_list": [ 00:14:13.743 { 00:14:13.743 "name": "spare", 00:14:13.743 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:13.743 "is_configured": true, 00:14:13.743 "data_offset": 2048, 00:14:13.743 "data_size": 63488 00:14:13.743 }, 00:14:13.743 { 00:14:13.743 "name": null, 00:14:13.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.743 "is_configured": false, 00:14:13.743 "data_offset": 2048, 00:14:13.743 "data_size": 63488 00:14:13.743 }, 00:14:13.743 { 00:14:13.743 "name": "BaseBdev3", 00:14:13.743 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:13.743 "is_configured": true, 00:14:13.743 "data_offset": 2048, 00:14:13.743 "data_size": 63488 00:14:13.743 }, 00:14:13.743 { 00:14:13.743 "name": "BaseBdev4", 00:14:13.743 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:13.743 "is_configured": true, 00:14:13.743 "data_offset": 2048, 00:14:13.743 "data_size": 63488 00:14:13.743 } 00:14:13.743 ] 00:14:13.743 }' 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.743 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.314 "name": "raid_bdev1", 00:14:14.314 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:14.314 "strip_size_kb": 0, 00:14:14.314 "state": "online", 00:14:14.314 "raid_level": "raid1", 00:14:14.314 "superblock": true, 00:14:14.314 "num_base_bdevs": 4, 00:14:14.314 "num_base_bdevs_discovered": 3, 00:14:14.314 "num_base_bdevs_operational": 3, 00:14:14.314 "base_bdevs_list": [ 00:14:14.314 { 00:14:14.314 "name": "spare", 00:14:14.314 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:14.314 "is_configured": true, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 }, 00:14:14.314 { 00:14:14.314 "name": null, 00:14:14.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.314 "is_configured": false, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 }, 00:14:14.314 { 00:14:14.314 "name": "BaseBdev3", 00:14:14.314 
"uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:14.314 "is_configured": true, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 }, 00:14:14.314 { 00:14:14.314 "name": "BaseBdev4", 00:14:14.314 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:14.314 "is_configured": true, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 } 00:14:14.314 ] 00:14:14.314 }' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.314 [2024-11-18 04:03:10.848535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.314 04:03:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.314 "name": "raid_bdev1", 00:14:14.314 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:14.314 "strip_size_kb": 0, 00:14:14.314 "state": "online", 
00:14:14.314 "raid_level": "raid1", 00:14:14.314 "superblock": true, 00:14:14.314 "num_base_bdevs": 4, 00:14:14.314 "num_base_bdevs_discovered": 2, 00:14:14.314 "num_base_bdevs_operational": 2, 00:14:14.314 "base_bdevs_list": [ 00:14:14.314 { 00:14:14.314 "name": null, 00:14:14.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.314 "is_configured": false, 00:14:14.314 "data_offset": 0, 00:14:14.314 "data_size": 63488 00:14:14.314 }, 00:14:14.314 { 00:14:14.314 "name": null, 00:14:14.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.314 "is_configured": false, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 }, 00:14:14.314 { 00:14:14.314 "name": "BaseBdev3", 00:14:14.314 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:14.314 "is_configured": true, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 }, 00:14:14.314 { 00:14:14.314 "name": "BaseBdev4", 00:14:14.314 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:14.314 "is_configured": true, 00:14:14.314 "data_offset": 2048, 00:14:14.314 "data_size": 63488 00:14:14.314 } 00:14:14.314 ] 00:14:14.314 }' 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.314 04:03:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.884 04:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.884 04:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.884 04:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.884 [2024-11-18 04:03:11.307832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.884 [2024-11-18 04:03:11.308044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:14:14.884 [2024-11-18 04:03:11.308097] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:14.884 [2024-11-18 04:03:11.308148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.884 [2024-11-18 04:03:11.322214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:14.884 04:03:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.884 04:03:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:14.884 [2024-11-18 04:03:11.324019] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.825 "name": "raid_bdev1", 00:14:15.825 "uuid": 
"4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:15.825 "strip_size_kb": 0, 00:14:15.825 "state": "online", 00:14:15.825 "raid_level": "raid1", 00:14:15.825 "superblock": true, 00:14:15.825 "num_base_bdevs": 4, 00:14:15.825 "num_base_bdevs_discovered": 3, 00:14:15.825 "num_base_bdevs_operational": 3, 00:14:15.825 "process": { 00:14:15.825 "type": "rebuild", 00:14:15.825 "target": "spare", 00:14:15.825 "progress": { 00:14:15.825 "blocks": 20480, 00:14:15.825 "percent": 32 00:14:15.825 } 00:14:15.825 }, 00:14:15.825 "base_bdevs_list": [ 00:14:15.825 { 00:14:15.825 "name": "spare", 00:14:15.825 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:15.825 "is_configured": true, 00:14:15.825 "data_offset": 2048, 00:14:15.825 "data_size": 63488 00:14:15.825 }, 00:14:15.825 { 00:14:15.825 "name": null, 00:14:15.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.825 "is_configured": false, 00:14:15.825 "data_offset": 2048, 00:14:15.825 "data_size": 63488 00:14:15.825 }, 00:14:15.825 { 00:14:15.825 "name": "BaseBdev3", 00:14:15.825 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:15.825 "is_configured": true, 00:14:15.825 "data_offset": 2048, 00:14:15.825 "data_size": 63488 00:14:15.825 }, 00:14:15.825 { 00:14:15.825 "name": "BaseBdev4", 00:14:15.825 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:15.825 "is_configured": true, 00:14:15.825 "data_offset": 2048, 00:14:15.825 "data_size": 63488 00:14:15.825 } 00:14:15.825 ] 00:14:15.825 }' 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.825 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.085 [2024-11-18 04:03:12.488130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.085 [2024-11-18 04:03:12.528477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.085 [2024-11-18 04:03:12.528527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.085 [2024-11-18 04:03:12.528560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.085 [2024-11-18 04:03:12.528568] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.085 "name": "raid_bdev1", 00:14:16.085 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:16.085 "strip_size_kb": 0, 00:14:16.085 "state": "online", 00:14:16.085 "raid_level": "raid1", 00:14:16.085 "superblock": true, 00:14:16.085 "num_base_bdevs": 4, 00:14:16.085 "num_base_bdevs_discovered": 2, 00:14:16.085 "num_base_bdevs_operational": 2, 00:14:16.085 "base_bdevs_list": [ 00:14:16.085 { 00:14:16.085 "name": null, 00:14:16.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.085 "is_configured": false, 00:14:16.085 "data_offset": 0, 00:14:16.085 "data_size": 63488 00:14:16.085 }, 00:14:16.085 { 00:14:16.085 "name": null, 00:14:16.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.085 "is_configured": false, 00:14:16.085 "data_offset": 2048, 00:14:16.085 "data_size": 63488 00:14:16.085 }, 00:14:16.085 { 00:14:16.085 "name": "BaseBdev3", 00:14:16.085 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:16.085 "is_configured": true, 00:14:16.085 "data_offset": 2048, 00:14:16.085 "data_size": 63488 00:14:16.085 }, 00:14:16.085 { 00:14:16.085 "name": "BaseBdev4", 00:14:16.085 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:16.085 "is_configured": true, 00:14:16.085 
"data_offset": 2048, 00:14:16.085 "data_size": 63488 00:14:16.085 } 00:14:16.085 ] 00:14:16.085 }' 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.085 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.345 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:16.345 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.345 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.345 [2024-11-18 04:03:12.956050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:16.345 [2024-11-18 04:03:12.956108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.345 [2024-11-18 04:03:12.956135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:16.345 [2024-11-18 04:03:12.956144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.345 [2024-11-18 04:03:12.956594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.345 [2024-11-18 04:03:12.956624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:16.346 [2024-11-18 04:03:12.956718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:16.346 [2024-11-18 04:03:12.956730] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:16.346 [2024-11-18 04:03:12.956747] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:16.346 [2024-11-18 04:03:12.956776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.346 [2024-11-18 04:03:12.970592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:16.346 spare 00:14:16.346 04:03:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.346 04:03:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:16.346 [2024-11-18 04:03:12.972399] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.729 04:03:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.729 "name": "raid_bdev1", 00:14:17.729 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:17.729 "strip_size_kb": 0, 00:14:17.729 "state": "online", 00:14:17.729 
"raid_level": "raid1", 00:14:17.729 "superblock": true, 00:14:17.729 "num_base_bdevs": 4, 00:14:17.729 "num_base_bdevs_discovered": 3, 00:14:17.729 "num_base_bdevs_operational": 3, 00:14:17.729 "process": { 00:14:17.729 "type": "rebuild", 00:14:17.729 "target": "spare", 00:14:17.729 "progress": { 00:14:17.729 "blocks": 20480, 00:14:17.729 "percent": 32 00:14:17.729 } 00:14:17.729 }, 00:14:17.729 "base_bdevs_list": [ 00:14:17.729 { 00:14:17.729 "name": "spare", 00:14:17.729 "uuid": "995c04ef-5328-58fb-949b-d845536c407f", 00:14:17.729 "is_configured": true, 00:14:17.729 "data_offset": 2048, 00:14:17.729 "data_size": 63488 00:14:17.729 }, 00:14:17.729 { 00:14:17.729 "name": null, 00:14:17.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.729 "is_configured": false, 00:14:17.729 "data_offset": 2048, 00:14:17.729 "data_size": 63488 00:14:17.729 }, 00:14:17.729 { 00:14:17.729 "name": "BaseBdev3", 00:14:17.729 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:17.729 "is_configured": true, 00:14:17.729 "data_offset": 2048, 00:14:17.729 "data_size": 63488 00:14:17.729 }, 00:14:17.729 { 00:14:17.729 "name": "BaseBdev4", 00:14:17.729 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:17.729 "is_configured": true, 00:14:17.729 "data_offset": 2048, 00:14:17.729 "data_size": 63488 00:14:17.729 } 00:14:17.729 ] 00:14:17.729 }' 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.729 [2024-11-18 04:03:14.107897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.729 [2024-11-18 04:03:14.176960] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:17.729 [2024-11-18 04:03:14.177016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.729 [2024-11-18 04:03:14.177031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:17.729 [2024-11-18 04:03:14.177039] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.729 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.730 
04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.730 "name": "raid_bdev1", 00:14:17.730 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:17.730 "strip_size_kb": 0, 00:14:17.730 "state": "online", 00:14:17.730 "raid_level": "raid1", 00:14:17.730 "superblock": true, 00:14:17.730 "num_base_bdevs": 4, 00:14:17.730 "num_base_bdevs_discovered": 2, 00:14:17.730 "num_base_bdevs_operational": 2, 00:14:17.730 "base_bdevs_list": [ 00:14:17.730 { 00:14:17.730 "name": null, 00:14:17.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.730 "is_configured": false, 00:14:17.730 "data_offset": 0, 00:14:17.730 "data_size": 63488 00:14:17.730 }, 00:14:17.730 { 00:14:17.730 "name": null, 00:14:17.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.730 "is_configured": false, 00:14:17.730 "data_offset": 2048, 00:14:17.730 "data_size": 63488 00:14:17.730 }, 00:14:17.730 { 00:14:17.730 "name": "BaseBdev3", 00:14:17.730 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:17.730 "is_configured": true, 00:14:17.730 "data_offset": 2048, 00:14:17.730 "data_size": 63488 00:14:17.730 }, 00:14:17.730 { 00:14:17.730 "name": "BaseBdev4", 00:14:17.730 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:17.730 "is_configured": true, 00:14:17.730 "data_offset": 2048, 00:14:17.730 "data_size": 63488 00:14:17.730 } 00:14:17.730 ] 00:14:17.730 }' 00:14:17.730 04:03:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.730 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.991 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.252 "name": "raid_bdev1", 00:14:18.252 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:18.252 "strip_size_kb": 0, 00:14:18.252 "state": "online", 00:14:18.252 "raid_level": "raid1", 00:14:18.252 "superblock": true, 00:14:18.252 "num_base_bdevs": 4, 00:14:18.252 "num_base_bdevs_discovered": 2, 00:14:18.252 "num_base_bdevs_operational": 2, 00:14:18.252 "base_bdevs_list": [ 00:14:18.252 { 00:14:18.252 "name": null, 00:14:18.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.252 "is_configured": false, 00:14:18.252 "data_offset": 0, 00:14:18.252 "data_size": 63488 00:14:18.252 }, 00:14:18.252 
{ 00:14:18.252 "name": null, 00:14:18.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.252 "is_configured": false, 00:14:18.252 "data_offset": 2048, 00:14:18.252 "data_size": 63488 00:14:18.252 }, 00:14:18.252 { 00:14:18.252 "name": "BaseBdev3", 00:14:18.252 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:18.252 "is_configured": true, 00:14:18.252 "data_offset": 2048, 00:14:18.252 "data_size": 63488 00:14:18.252 }, 00:14:18.252 { 00:14:18.252 "name": "BaseBdev4", 00:14:18.252 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:18.252 "is_configured": true, 00:14:18.252 "data_offset": 2048, 00:14:18.252 "data_size": 63488 00:14:18.252 } 00:14:18.252 ] 00:14:18.252 }' 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.252 [2024-11-18 04:03:14.764447] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:18.252 [2024-11-18 04:03:14.764543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.252 [2024-11-18 04:03:14.764567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:18.252 [2024-11-18 04:03:14.764578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.252 [2024-11-18 04:03:14.765014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.252 [2024-11-18 04:03:14.765038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:18.252 [2024-11-18 04:03:14.765112] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:18.252 [2024-11-18 04:03:14.765128] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:18.252 [2024-11-18 04:03:14.765136] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:18.252 [2024-11-18 04:03:14.765159] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:18.252 BaseBdev1 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.252 04:03:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.194 04:03:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.194 "name": "raid_bdev1", 00:14:19.194 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:19.194 "strip_size_kb": 0, 00:14:19.194 "state": "online", 00:14:19.194 "raid_level": "raid1", 00:14:19.194 "superblock": true, 00:14:19.194 "num_base_bdevs": 4, 00:14:19.194 "num_base_bdevs_discovered": 2, 00:14:19.194 "num_base_bdevs_operational": 2, 00:14:19.194 "base_bdevs_list": [ 00:14:19.194 { 00:14:19.194 "name": null, 00:14:19.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.194 "is_configured": false, 00:14:19.194 "data_offset": 0, 00:14:19.194 "data_size": 63488 00:14:19.194 }, 00:14:19.194 { 00:14:19.194 "name": null, 00:14:19.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.194 
"is_configured": false, 00:14:19.194 "data_offset": 2048, 00:14:19.194 "data_size": 63488 00:14:19.194 }, 00:14:19.194 { 00:14:19.194 "name": "BaseBdev3", 00:14:19.194 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:19.194 "is_configured": true, 00:14:19.194 "data_offset": 2048, 00:14:19.194 "data_size": 63488 00:14:19.194 }, 00:14:19.194 { 00:14:19.194 "name": "BaseBdev4", 00:14:19.194 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:19.194 "is_configured": true, 00:14:19.194 "data_offset": 2048, 00:14:19.194 "data_size": 63488 00:14:19.194 } 00:14:19.194 ] 00:14:19.194 }' 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.194 04:03:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:19.765 "name": "raid_bdev1", 00:14:19.765 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:19.765 "strip_size_kb": 0, 00:14:19.765 "state": "online", 00:14:19.765 "raid_level": "raid1", 00:14:19.765 "superblock": true, 00:14:19.765 "num_base_bdevs": 4, 00:14:19.765 "num_base_bdevs_discovered": 2, 00:14:19.765 "num_base_bdevs_operational": 2, 00:14:19.765 "base_bdevs_list": [ 00:14:19.765 { 00:14:19.765 "name": null, 00:14:19.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.765 "is_configured": false, 00:14:19.765 "data_offset": 0, 00:14:19.765 "data_size": 63488 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "name": null, 00:14:19.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.765 "is_configured": false, 00:14:19.765 "data_offset": 2048, 00:14:19.765 "data_size": 63488 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "name": "BaseBdev3", 00:14:19.765 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:19.765 "is_configured": true, 00:14:19.765 "data_offset": 2048, 00:14:19.765 "data_size": 63488 00:14:19.765 }, 00:14:19.765 { 00:14:19.765 "name": "BaseBdev4", 00:14:19.765 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:19.765 "is_configured": true, 00:14:19.765 "data_offset": 2048, 00:14:19.765 "data_size": 63488 00:14:19.765 } 00:14:19.765 ] 00:14:19.765 }' 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.765 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.765 [2024-11-18 04:03:16.401690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.765 [2024-11-18 04:03:16.401899] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:19.765 [2024-11-18 04:03:16.401915] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:20.025 request: 00:14:20.025 { 00:14:20.025 "base_bdev": "BaseBdev1", 00:14:20.025 "raid_bdev": "raid_bdev1", 00:14:20.025 "method": "bdev_raid_add_base_bdev", 00:14:20.025 "req_id": 1 00:14:20.025 } 00:14:20.025 Got JSON-RPC error response 00:14:20.025 response: 00:14:20.025 { 00:14:20.025 "code": -22, 00:14:20.025 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:20.025 } 00:14:20.025 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:20.025 04:03:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:20.025 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:20.025 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:20.025 04:03:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:20.025 04:03:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.966 "name": "raid_bdev1", 00:14:20.966 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:20.966 "strip_size_kb": 0, 00:14:20.966 "state": "online", 00:14:20.966 "raid_level": "raid1", 00:14:20.966 "superblock": true, 00:14:20.966 "num_base_bdevs": 4, 00:14:20.966 "num_base_bdevs_discovered": 2, 00:14:20.966 "num_base_bdevs_operational": 2, 00:14:20.966 "base_bdevs_list": [ 00:14:20.966 { 00:14:20.966 "name": null, 00:14:20.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.966 "is_configured": false, 00:14:20.966 "data_offset": 0, 00:14:20.966 "data_size": 63488 00:14:20.966 }, 00:14:20.966 { 00:14:20.966 "name": null, 00:14:20.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.966 "is_configured": false, 00:14:20.966 "data_offset": 2048, 00:14:20.966 "data_size": 63488 00:14:20.966 }, 00:14:20.966 { 00:14:20.966 "name": "BaseBdev3", 00:14:20.966 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:20.966 "is_configured": true, 00:14:20.966 "data_offset": 2048, 00:14:20.966 "data_size": 63488 00:14:20.966 }, 00:14:20.966 { 00:14:20.966 "name": "BaseBdev4", 00:14:20.966 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:20.966 "is_configured": true, 00:14:20.966 "data_offset": 2048, 00:14:20.966 "data_size": 63488 00:14:20.966 } 00:14:20.966 ] 00:14:20.966 }' 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.966 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.227 04:03:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.227 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.488 "name": "raid_bdev1", 00:14:21.488 "uuid": "4a65c6fb-7a21-454e-8769-a251564f0a8a", 00:14:21.488 "strip_size_kb": 0, 00:14:21.488 "state": "online", 00:14:21.488 "raid_level": "raid1", 00:14:21.488 "superblock": true, 00:14:21.488 "num_base_bdevs": 4, 00:14:21.488 "num_base_bdevs_discovered": 2, 00:14:21.488 "num_base_bdevs_operational": 2, 00:14:21.488 "base_bdevs_list": [ 00:14:21.488 { 00:14:21.488 "name": null, 00:14:21.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.488 "is_configured": false, 00:14:21.488 "data_offset": 0, 00:14:21.488 "data_size": 63488 00:14:21.488 }, 00:14:21.488 { 00:14:21.488 "name": null, 00:14:21.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.488 "is_configured": false, 00:14:21.488 "data_offset": 2048, 00:14:21.488 "data_size": 63488 00:14:21.488 }, 00:14:21.488 { 00:14:21.488 "name": "BaseBdev3", 00:14:21.488 "uuid": "b6e7b1bf-4fc2-5b4f-986a-615095369bf2", 00:14:21.488 "is_configured": true, 00:14:21.488 "data_offset": 2048, 00:14:21.488 "data_size": 63488 00:14:21.488 }, 
00:14:21.488 { 00:14:21.488 "name": "BaseBdev4", 00:14:21.488 "uuid": "788f5c32-ce5a-5f0b-be02-cdb7c487c04e", 00:14:21.488 "is_configured": true, 00:14:21.488 "data_offset": 2048, 00:14:21.488 "data_size": 63488 00:14:21.488 } 00:14:21.488 ] 00:14:21.488 }' 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.488 04:03:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77930 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77930 ']' 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77930 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77930 00:14:21.488 killing process with pid 77930 00:14:21.488 Received shutdown signal, test time was about 60.000000 seconds 00:14:21.488 00:14:21.488 Latency(us) 00:14:21.488 [2024-11-18T04:03:18.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.488 [2024-11-18T04:03:18.129Z] =================================================================================================================== 00:14:21.488 [2024-11-18T04:03:18.129Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77930' 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77930 00:14:21.488 [2024-11-18 04:03:18.041912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.488 [2024-11-18 04:03:18.042024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.488 [2024-11-18 04:03:18.042088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.488 [2024-11-18 04:03:18.042097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:21.488 04:03:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77930 00:14:22.058 [2024-11-18 04:03:18.502963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:23.027 00:14:23.027 real 0m24.044s 00:14:23.027 user 0m29.484s 00:14:23.027 sys 0m3.374s 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.027 ************************************ 00:14:23.027 END TEST raid_rebuild_test_sb 00:14:23.027 ************************************ 00:14:23.027 04:03:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:23.027 04:03:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:23.027 04:03:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.027 04:03:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:23.027 ************************************ 00:14:23.027 START TEST raid_rebuild_test_io 00:14:23.027 ************************************ 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.027 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78678 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78678 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78678 ']' 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.028 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.028 04:03:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.288 [2024-11-18 04:03:19.698354] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:23.288 [2024-11-18 04:03:19.698560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:23.288 Zero copy mechanism will not be used. 00:14:23.288 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78678 ] 00:14:23.288 [2024-11-18 04:03:19.873749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.548 [2024-11-18 04:03:19.975958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.548 [2024-11-18 04:03:20.159945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.548 [2024-11-18 04:03:20.160069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:24.118 04:03:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 BaseBdev1_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 [2024-11-18 04:03:20.556315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:24.118 [2024-11-18 04:03:20.556383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.118 [2024-11-18 04:03:20.556407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:24.118 [2024-11-18 04:03:20.556418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.118 [2024-11-18 04:03:20.558425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.118 [2024-11-18 04:03:20.558465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:24.118 BaseBdev1 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 
BaseBdev2_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 [2024-11-18 04:03:20.611198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:24.118 [2024-11-18 04:03:20.611265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.118 [2024-11-18 04:03:20.611283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:24.118 [2024-11-18 04:03:20.611292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.118 [2024-11-18 04:03:20.613271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.118 [2024-11-18 04:03:20.613307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:24.118 BaseBdev2 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 BaseBdev3_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 [2024-11-18 04:03:20.700514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:24.118 [2024-11-18 04:03:20.700564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.118 [2024-11-18 04:03:20.700585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:24.118 [2024-11-18 04:03:20.700595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.118 [2024-11-18 04:03:20.702538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.118 [2024-11-18 04:03:20.702617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:24.118 BaseBdev3 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 BaseBdev4_malloc 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:24.118 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.118 [2024-11-18 04:03:20.754060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:24.118 [2024-11-18 04:03:20.754105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.119 [2024-11-18 04:03:20.754131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:24.119 [2024-11-18 04:03:20.754157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.119 [2024-11-18 04:03:20.756086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.119 [2024-11-18 04:03:20.756124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:24.379 BaseBdev4 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.379 spare_malloc 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.379 spare_delay 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.379 [2024-11-18 04:03:20.818459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.379 [2024-11-18 04:03:20.818508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.379 [2024-11-18 04:03:20.818541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:24.379 [2024-11-18 04:03:20.818551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.379 [2024-11-18 04:03:20.820521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.379 [2024-11-18 04:03:20.820558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.379 spare 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.379 [2024-11-18 04:03:20.830484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.379 [2024-11-18 04:03:20.832253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.379 [2024-11-18 04:03:20.832313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.379 [2024-11-18 04:03:20.832360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:24.379 [2024-11-18 04:03:20.832428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:24.379 [2024-11-18 04:03:20.832440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:24.379 [2024-11-18 04:03:20.832662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:24.379 [2024-11-18 04:03:20.832807] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:24.379 [2024-11-18 04:03:20.832819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:24.379 [2024-11-18 04:03:20.832975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.379 "name": "raid_bdev1", 00:14:24.379 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:24.379 "strip_size_kb": 0, 00:14:24.379 "state": "online", 00:14:24.379 "raid_level": "raid1", 00:14:24.379 "superblock": false, 00:14:24.379 "num_base_bdevs": 4, 00:14:24.379 "num_base_bdevs_discovered": 4, 00:14:24.379 "num_base_bdevs_operational": 4, 00:14:24.379 "base_bdevs_list": [ 00:14:24.379 { 00:14:24.379 "name": "BaseBdev1", 00:14:24.379 "uuid": "7c177b20-5b6d-5428-a9a7-9547b17a1934", 00:14:24.379 "is_configured": true, 00:14:24.379 "data_offset": 0, 00:14:24.379 "data_size": 65536 00:14:24.379 }, 00:14:24.379 { 00:14:24.379 "name": "BaseBdev2", 00:14:24.379 "uuid": "f3f7da5c-fbab-5395-a574-6589bf829662", 00:14:24.379 "is_configured": true, 00:14:24.379 "data_offset": 0, 00:14:24.379 "data_size": 65536 00:14:24.379 }, 00:14:24.379 { 00:14:24.379 "name": "BaseBdev3", 00:14:24.379 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:24.379 "is_configured": true, 00:14:24.379 "data_offset": 0, 00:14:24.379 "data_size": 65536 00:14:24.379 }, 00:14:24.379 { 00:14:24.379 "name": "BaseBdev4", 00:14:24.379 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:24.379 "is_configured": true, 00:14:24.379 "data_offset": 0, 00:14:24.379 "data_size": 65536 00:14:24.379 } 00:14:24.379 ] 00:14:24.379 }' 00:14:24.379 
04:03:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.379 04:03:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:24.949 [2024-11-18 04:03:21.301978] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:24.949 04:03:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.949 [2024-11-18 04:03:21.401461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.949 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.950 "name": "raid_bdev1", 00:14:24.950 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:24.950 "strip_size_kb": 0, 00:14:24.950 "state": "online", 00:14:24.950 "raid_level": "raid1", 00:14:24.950 "superblock": false, 00:14:24.950 "num_base_bdevs": 4, 00:14:24.950 "num_base_bdevs_discovered": 3, 00:14:24.950 "num_base_bdevs_operational": 3, 00:14:24.950 "base_bdevs_list": [ 00:14:24.950 { 00:14:24.950 "name": null, 00:14:24.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.950 "is_configured": false, 00:14:24.950 "data_offset": 0, 00:14:24.950 "data_size": 65536 00:14:24.950 }, 00:14:24.950 { 00:14:24.950 "name": "BaseBdev2", 00:14:24.950 "uuid": "f3f7da5c-fbab-5395-a574-6589bf829662", 00:14:24.950 "is_configured": true, 00:14:24.950 "data_offset": 0, 00:14:24.950 "data_size": 65536 00:14:24.950 }, 00:14:24.950 { 00:14:24.950 "name": "BaseBdev3", 00:14:24.950 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:24.950 "is_configured": true, 00:14:24.950 "data_offset": 0, 00:14:24.950 "data_size": 65536 00:14:24.950 }, 00:14:24.950 { 00:14:24.950 "name": "BaseBdev4", 00:14:24.950 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:24.950 "is_configured": true, 00:14:24.950 "data_offset": 0, 00:14:24.950 "data_size": 65536 00:14:24.950 } 00:14:24.950 ] 00:14:24.950 }' 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.950 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.950 [2024-11-18 04:03:21.477356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:24.950 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:24.950 Zero copy mechanism will not be used. 00:14:24.950 Running I/O for 60 seconds... 
00:14:25.519 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.519 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.519 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.519 [2024-11-18 04:03:21.879324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.519 04:03:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.519 04:03:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:25.520 [2024-11-18 04:03:21.950156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:25.520 [2024-11-18 04:03:21.952169] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.520 [2024-11-18 04:03:22.065325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:25.520 [2024-11-18 04:03:22.066845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:25.779 [2024-11-18 04:03:22.288242] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:25.779 [2024-11-18 04:03:22.289090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.039 190.00 IOPS, 570.00 MiB/s [2024-11-18T04:03:22.680Z] [2024-11-18 04:03:22.613603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:26.039 [2024-11-18 04:03:22.614245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:26.299 [2024-11-18 04:03:22.738480] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.299 04:03:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 04:03:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.560 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.560 "name": "raid_bdev1", 00:14:26.560 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:26.560 "strip_size_kb": 0, 00:14:26.560 "state": "online", 00:14:26.560 "raid_level": "raid1", 00:14:26.560 "superblock": false, 00:14:26.560 "num_base_bdevs": 4, 00:14:26.560 "num_base_bdevs_discovered": 4, 00:14:26.560 "num_base_bdevs_operational": 4, 00:14:26.560 "process": { 00:14:26.560 "type": "rebuild", 00:14:26.560 "target": "spare", 00:14:26.560 "progress": { 00:14:26.560 "blocks": 12288, 00:14:26.560 "percent": 18 00:14:26.560 } 00:14:26.560 }, 00:14:26.560 "base_bdevs_list": [ 00:14:26.560 { 00:14:26.560 "name": "spare", 00:14:26.560 "uuid": 
"527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 }, 00:14:26.560 { 00:14:26.560 "name": "BaseBdev2", 00:14:26.560 "uuid": "f3f7da5c-fbab-5395-a574-6589bf829662", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 }, 00:14:26.560 { 00:14:26.560 "name": "BaseBdev3", 00:14:26.560 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 }, 00:14:26.560 { 00:14:26.560 "name": "BaseBdev4", 00:14:26.560 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 } 00:14:26.560 ] 00:14:26.560 }' 00:14:26.560 04:03:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.560 [2024-11-18 04:03:22.989177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.560 [2024-11-18 04:03:22.990459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 [2024-11-18 
04:03:23.089134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.560 [2024-11-18 04:03:23.093183] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.560 [2024-11-18 04:03:23.093496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.560 [2024-11-18 04:03:23.104802] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.560 [2024-11-18 04:03:23.115366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.560 [2024-11-18 04:03:23.115412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.560 [2024-11-18 04:03:23.115424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.560 [2024-11-18 04:03:23.136116] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.560 04:03:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.560 "name": "raid_bdev1", 00:14:26.560 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:26.560 "strip_size_kb": 0, 00:14:26.560 "state": "online", 00:14:26.560 "raid_level": "raid1", 00:14:26.560 "superblock": false, 00:14:26.560 "num_base_bdevs": 4, 00:14:26.560 "num_base_bdevs_discovered": 3, 00:14:26.560 "num_base_bdevs_operational": 3, 00:14:26.560 "base_bdevs_list": [ 00:14:26.560 { 00:14:26.560 "name": null, 00:14:26.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.560 "is_configured": false, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 }, 00:14:26.560 { 00:14:26.560 "name": "BaseBdev2", 00:14:26.560 "uuid": "f3f7da5c-fbab-5395-a574-6589bf829662", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 }, 00:14:26.560 { 00:14:26.560 "name": "BaseBdev3", 00:14:26.560 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 }, 
00:14:26.560 { 00:14:26.560 "name": "BaseBdev4", 00:14:26.560 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:26.560 "is_configured": true, 00:14:26.560 "data_offset": 0, 00:14:26.560 "data_size": 65536 00:14:26.560 } 00:14:26.560 ] 00:14:26.560 }' 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.560 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.130 172.50 IOPS, 517.50 MiB/s [2024-11-18T04:03:23.771Z] 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.130 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.130 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.130 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.130 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.131 "name": "raid_bdev1", 00:14:27.131 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:27.131 "strip_size_kb": 0, 00:14:27.131 "state": "online", 00:14:27.131 "raid_level": "raid1", 00:14:27.131 "superblock": false, 00:14:27.131 "num_base_bdevs": 4, 00:14:27.131 
"num_base_bdevs_discovered": 3, 00:14:27.131 "num_base_bdevs_operational": 3, 00:14:27.131 "base_bdevs_list": [ 00:14:27.131 { 00:14:27.131 "name": null, 00:14:27.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.131 "is_configured": false, 00:14:27.131 "data_offset": 0, 00:14:27.131 "data_size": 65536 00:14:27.131 }, 00:14:27.131 { 00:14:27.131 "name": "BaseBdev2", 00:14:27.131 "uuid": "f3f7da5c-fbab-5395-a574-6589bf829662", 00:14:27.131 "is_configured": true, 00:14:27.131 "data_offset": 0, 00:14:27.131 "data_size": 65536 00:14:27.131 }, 00:14:27.131 { 00:14:27.131 "name": "BaseBdev3", 00:14:27.131 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:27.131 "is_configured": true, 00:14:27.131 "data_offset": 0, 00:14:27.131 "data_size": 65536 00:14:27.131 }, 00:14:27.131 { 00:14:27.131 "name": "BaseBdev4", 00:14:27.131 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:27.131 "is_configured": true, 00:14:27.131 "data_offset": 0, 00:14:27.131 "data_size": 65536 00:14:27.131 } 00:14:27.131 ] 00:14:27.131 }' 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.131 [2024-11-18 04:03:23.726706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.131 04:03:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:27.391 [2024-11-18 04:03:23.779661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:27.391 [2024-11-18 04:03:23.781587] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.391 [2024-11-18 04:03:23.903002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:27.391 [2024-11-18 04:03:23.904276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:27.651 [2024-11-18 04:03:24.113918] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.651 [2024-11-18 04:03:24.114326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:27.911 [2024-11-18 04:03:24.355687] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:27.911 [2024-11-18 04:03:24.357088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:28.171 155.33 IOPS, 466.00 MiB/s [2024-11-18T04:03:24.812Z] [2024-11-18 04:03:24.567022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:28.171 [2024-11-18 04:03:24.567871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.171 
04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.171 04:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.431 "name": "raid_bdev1", 00:14:28.431 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:28.431 "strip_size_kb": 0, 00:14:28.431 "state": "online", 00:14:28.431 "raid_level": "raid1", 00:14:28.431 "superblock": false, 00:14:28.431 "num_base_bdevs": 4, 00:14:28.431 "num_base_bdevs_discovered": 4, 00:14:28.431 "num_base_bdevs_operational": 4, 00:14:28.431 "process": { 00:14:28.431 "type": "rebuild", 00:14:28.431 "target": "spare", 00:14:28.431 "progress": { 00:14:28.431 "blocks": 10240, 00:14:28.431 "percent": 15 00:14:28.431 } 00:14:28.431 }, 00:14:28.431 "base_bdevs_list": [ 00:14:28.431 { 00:14:28.431 "name": "spare", 00:14:28.431 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:28.431 "is_configured": true, 00:14:28.431 "data_offset": 0, 00:14:28.431 "data_size": 65536 00:14:28.431 }, 00:14:28.431 { 00:14:28.431 "name": "BaseBdev2", 00:14:28.431 "uuid": "f3f7da5c-fbab-5395-a574-6589bf829662", 00:14:28.431 "is_configured": true, 00:14:28.431 "data_offset": 0, 00:14:28.431 "data_size": 65536 00:14:28.431 }, 
00:14:28.431 { 00:14:28.431 "name": "BaseBdev3", 00:14:28.431 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:28.431 "is_configured": true, 00:14:28.431 "data_offset": 0, 00:14:28.431 "data_size": 65536 00:14:28.431 }, 00:14:28.431 { 00:14:28.431 "name": "BaseBdev4", 00:14:28.431 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:28.431 "is_configured": true, 00:14:28.431 "data_offset": 0, 00:14:28.431 "data_size": 65536 00:14:28.431 } 00:14:28.431 ] 00:14:28.431 }' 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.431 04:03:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.431 [2024-11-18 04:03:24.924734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:28.431 [2024-11-18 04:03:25.005992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:28.431 [2024-11-18 
04:03:25.006377] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:28.431 [2024-11-18 04:03:25.012687] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:28.431 [2024-11-18 04:03:25.012765] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.431 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.691 "name": "raid_bdev1", 00:14:28.691 
"uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:28.691 "strip_size_kb": 0, 00:14:28.691 "state": "online", 00:14:28.691 "raid_level": "raid1", 00:14:28.691 "superblock": false, 00:14:28.691 "num_base_bdevs": 4, 00:14:28.691 "num_base_bdevs_discovered": 3, 00:14:28.691 "num_base_bdevs_operational": 3, 00:14:28.691 "process": { 00:14:28.691 "type": "rebuild", 00:14:28.691 "target": "spare", 00:14:28.691 "progress": { 00:14:28.691 "blocks": 16384, 00:14:28.691 "percent": 25 00:14:28.691 } 00:14:28.691 }, 00:14:28.691 "base_bdevs_list": [ 00:14:28.691 { 00:14:28.691 "name": "spare", 00:14:28.691 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:28.691 "is_configured": true, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 }, 00:14:28.691 { 00:14:28.691 "name": null, 00:14:28.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.691 "is_configured": false, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 }, 00:14:28.691 { 00:14:28.691 "name": "BaseBdev3", 00:14:28.691 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:28.691 "is_configured": true, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 }, 00:14:28.691 { 00:14:28.691 "name": "BaseBdev4", 00:14:28.691 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:28.691 "is_configured": true, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 } 00:14:28.691 ] 00:14:28.691 }' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=479 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.691 "name": "raid_bdev1", 00:14:28.691 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:28.691 "strip_size_kb": 0, 00:14:28.691 "state": "online", 00:14:28.691 "raid_level": "raid1", 00:14:28.691 "superblock": false, 00:14:28.691 "num_base_bdevs": 4, 00:14:28.691 "num_base_bdevs_discovered": 3, 00:14:28.691 "num_base_bdevs_operational": 3, 00:14:28.691 "process": { 00:14:28.691 "type": "rebuild", 00:14:28.691 "target": "spare", 00:14:28.691 "progress": { 00:14:28.691 "blocks": 18432, 00:14:28.691 "percent": 28 00:14:28.691 } 00:14:28.691 }, 00:14:28.691 "base_bdevs_list": [ 00:14:28.691 { 00:14:28.691 "name": 
"spare", 00:14:28.691 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:28.691 "is_configured": true, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 }, 00:14:28.691 { 00:14:28.691 "name": null, 00:14:28.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.691 "is_configured": false, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 }, 00:14:28.691 { 00:14:28.691 "name": "BaseBdev3", 00:14:28.691 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:28.691 "is_configured": true, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 }, 00:14:28.691 { 00:14:28.691 "name": "BaseBdev4", 00:14:28.691 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:28.691 "is_configured": true, 00:14:28.691 "data_offset": 0, 00:14:28.691 "data_size": 65536 00:14:28.691 } 00:14:28.691 ] 00:14:28.691 }' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.691 04:03:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.951 [2024-11-18 04:03:25.406175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:29.891 145.75 IOPS, 437.25 MiB/s [2024-11-18T04:03:26.532Z] 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.891 "name": "raid_bdev1", 00:14:29.891 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:29.891 "strip_size_kb": 0, 00:14:29.891 "state": "online", 00:14:29.891 "raid_level": "raid1", 00:14:29.891 "superblock": false, 00:14:29.891 "num_base_bdevs": 4, 00:14:29.891 "num_base_bdevs_discovered": 3, 00:14:29.891 "num_base_bdevs_operational": 3, 00:14:29.891 "process": { 00:14:29.891 "type": "rebuild", 00:14:29.891 "target": "spare", 00:14:29.891 "progress": { 00:14:29.891 "blocks": 38912, 00:14:29.891 "percent": 59 00:14:29.891 } 00:14:29.891 }, 00:14:29.891 "base_bdevs_list": [ 00:14:29.891 { 00:14:29.891 "name": "spare", 00:14:29.891 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:29.891 "is_configured": true, 00:14:29.891 "data_offset": 0, 00:14:29.891 "data_size": 65536 00:14:29.891 }, 00:14:29.891 { 00:14:29.891 "name": null, 00:14:29.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.891 "is_configured": false, 00:14:29.891 "data_offset": 0, 00:14:29.891 
"data_size": 65536 00:14:29.891 }, 00:14:29.891 { 00:14:29.891 "name": "BaseBdev3", 00:14:29.891 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:29.891 "is_configured": true, 00:14:29.891 "data_offset": 0, 00:14:29.891 "data_size": 65536 00:14:29.891 }, 00:14:29.891 { 00:14:29.891 "name": "BaseBdev4", 00:14:29.891 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:29.891 "is_configured": true, 00:14:29.891 "data_offset": 0, 00:14:29.891 "data_size": 65536 00:14:29.891 } 00:14:29.891 ] 00:14:29.891 }' 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.891 04:03:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.151 127.00 IOPS, 381.00 MiB/s [2024-11-18T04:03:26.792Z] [2024-11-18 04:03:26.559925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:30.411 [2024-11-18 04:03:26.881391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:30.671 [2024-11-18 04:03:27.193729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:30.671 [2024-11-18 04:03:27.308006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.941 111.33 IOPS, 334.00 MiB/s [2024-11-18T04:03:27.582Z] 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.941 "name": "raid_bdev1", 00:14:30.941 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:30.941 "strip_size_kb": 0, 00:14:30.941 "state": "online", 00:14:30.941 "raid_level": "raid1", 00:14:30.941 "superblock": false, 00:14:30.941 "num_base_bdevs": 4, 00:14:30.941 "num_base_bdevs_discovered": 3, 00:14:30.941 "num_base_bdevs_operational": 3, 00:14:30.941 "process": { 00:14:30.941 "type": "rebuild", 00:14:30.941 "target": "spare", 00:14:30.941 "progress": { 00:14:30.941 "blocks": 59392, 00:14:30.941 "percent": 90 00:14:30.941 } 00:14:30.941 }, 00:14:30.941 "base_bdevs_list": [ 00:14:30.941 { 00:14:30.941 "name": "spare", 00:14:30.941 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:30.941 "is_configured": true, 00:14:30.941 "data_offset": 0, 00:14:30.941 "data_size": 65536 00:14:30.941 }, 00:14:30.941 { 00:14:30.941 "name": 
null, 00:14:30.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.941 "is_configured": false, 00:14:30.941 "data_offset": 0, 00:14:30.941 "data_size": 65536 00:14:30.941 }, 00:14:30.941 { 00:14:30.941 "name": "BaseBdev3", 00:14:30.941 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:30.941 "is_configured": true, 00:14:30.941 "data_offset": 0, 00:14:30.941 "data_size": 65536 00:14:30.941 }, 00:14:30.941 { 00:14:30.941 "name": "BaseBdev4", 00:14:30.941 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:30.941 "is_configured": true, 00:14:30.941 "data_offset": 0, 00:14:30.941 "data_size": 65536 00:14:30.941 } 00:14:30.941 ] 00:14:30.941 }' 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.941 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.214 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.214 04:03:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.214 [2024-11-18 04:03:27.744005] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.214 [2024-11-18 04:03:27.848738] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.474 [2024-11-18 04:03:27.851104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.044 99.86 IOPS, 299.57 MiB/s [2024-11-18T04:03:28.685Z] 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.044 "name": "raid_bdev1", 00:14:32.044 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:32.044 "strip_size_kb": 0, 00:14:32.044 "state": "online", 00:14:32.044 "raid_level": "raid1", 00:14:32.044 "superblock": false, 00:14:32.044 "num_base_bdevs": 4, 00:14:32.044 "num_base_bdevs_discovered": 3, 00:14:32.044 "num_base_bdevs_operational": 3, 00:14:32.044 "base_bdevs_list": [ 00:14:32.044 { 00:14:32.044 "name": "spare", 00:14:32.044 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:32.044 "is_configured": true, 00:14:32.044 "data_offset": 0, 00:14:32.044 "data_size": 65536 00:14:32.044 }, 00:14:32.044 { 00:14:32.044 "name": null, 00:14:32.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.044 "is_configured": false, 00:14:32.044 "data_offset": 0, 00:14:32.044 "data_size": 65536 00:14:32.044 }, 00:14:32.044 { 00:14:32.044 "name": "BaseBdev3", 00:14:32.044 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:32.044 "is_configured": true, 00:14:32.044 "data_offset": 0, 00:14:32.044 "data_size": 
65536 00:14:32.044 }, 00:14:32.044 { 00:14:32.044 "name": "BaseBdev4", 00:14:32.044 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:32.044 "is_configured": true, 00:14:32.044 "data_offset": 0, 00:14:32.044 "data_size": 65536 00:14:32.044 } 00:14:32.044 ] 00:14:32.044 }' 00:14:32.044 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.304 "name": "raid_bdev1", 00:14:32.304 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:32.304 "strip_size_kb": 0, 00:14:32.304 "state": "online", 00:14:32.304 "raid_level": "raid1", 00:14:32.304 "superblock": false, 00:14:32.304 "num_base_bdevs": 4, 00:14:32.304 "num_base_bdevs_discovered": 3, 00:14:32.304 "num_base_bdevs_operational": 3, 00:14:32.304 "base_bdevs_list": [ 00:14:32.304 { 00:14:32.304 "name": "spare", 00:14:32.304 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:32.304 "is_configured": true, 00:14:32.304 "data_offset": 0, 00:14:32.304 "data_size": 65536 00:14:32.304 }, 00:14:32.304 { 00:14:32.304 "name": null, 00:14:32.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.304 "is_configured": false, 00:14:32.304 "data_offset": 0, 00:14:32.304 "data_size": 65536 00:14:32.304 }, 00:14:32.304 { 00:14:32.304 "name": "BaseBdev3", 00:14:32.304 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:32.304 "is_configured": true, 00:14:32.304 "data_offset": 0, 00:14:32.304 "data_size": 65536 00:14:32.304 }, 00:14:32.304 { 00:14:32.304 "name": "BaseBdev4", 00:14:32.304 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:32.304 "is_configured": true, 00:14:32.304 "data_offset": 0, 00:14:32.304 "data_size": 65536 00:14:32.304 } 00:14:32.304 ] 00:14:32.304 }' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.304 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.304 "name": "raid_bdev1", 00:14:32.305 "uuid": "f5ff065c-3c0d-4627-b609-61a7879aeb04", 00:14:32.305 "strip_size_kb": 0, 00:14:32.305 "state": "online", 00:14:32.305 "raid_level": "raid1", 00:14:32.305 "superblock": false, 00:14:32.305 "num_base_bdevs": 4, 00:14:32.305 "num_base_bdevs_discovered": 3, 00:14:32.305 "num_base_bdevs_operational": 3, 00:14:32.305 "base_bdevs_list": [ 00:14:32.305 { 00:14:32.305 "name": "spare", 
00:14:32.305 "uuid": "527eb8b2-0cc4-567a-a660-9c036021374b", 00:14:32.305 "is_configured": true, 00:14:32.305 "data_offset": 0, 00:14:32.305 "data_size": 65536 00:14:32.305 }, 00:14:32.305 { 00:14:32.305 "name": null, 00:14:32.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.305 "is_configured": false, 00:14:32.305 "data_offset": 0, 00:14:32.305 "data_size": 65536 00:14:32.305 }, 00:14:32.305 { 00:14:32.305 "name": "BaseBdev3", 00:14:32.305 "uuid": "b4dacfb4-64a7-5df8-bfb0-a152a8b9c15b", 00:14:32.305 "is_configured": true, 00:14:32.305 "data_offset": 0, 00:14:32.305 "data_size": 65536 00:14:32.305 }, 00:14:32.305 { 00:14:32.305 "name": "BaseBdev4", 00:14:32.305 "uuid": "a6057154-250b-5c81-8ddd-0f56ef0e4091", 00:14:32.305 "is_configured": true, 00:14:32.305 "data_offset": 0, 00:14:32.305 "data_size": 65536 00:14:32.305 } 00:14:32.305 ] 00:14:32.305 }' 00:14:32.305 04:03:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.305 04:03:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.874 [2024-11-18 04:03:29.276668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.874 [2024-11-18 04:03:29.276760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.874 00:14:32.874 Latency(us) 00:14:32.874 [2024-11-18T04:03:29.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.874 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:32.874 raid_bdev1 : 7.89 93.64 280.91 0.00 0.00 15277.37 284.39 116762.83 00:14:32.874 
[2024-11-18T04:03:29.515Z] =================================================================================================================== 00:14:32.874 [2024-11-18T04:03:29.515Z] Total : 93.64 280.91 0.00 0.00 15277.37 284.39 116762.83 00:14:32.874 [2024-11-18 04:03:29.377459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.874 [2024-11-18 04:03:29.377541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.874 [2024-11-18 04:03:29.377675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.874 [2024-11-18 04:03:29.377730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:32.874 { 00:14:32.874 "results": [ 00:14:32.874 { 00:14:32.874 "job": "raid_bdev1", 00:14:32.874 "core_mask": "0x1", 00:14:32.874 "workload": "randrw", 00:14:32.874 "percentage": 50, 00:14:32.874 "status": "finished", 00:14:32.874 "queue_depth": 2, 00:14:32.874 "io_size": 3145728, 00:14:32.874 "runtime": 7.89228, 00:14:32.874 "iops": 93.63580612953417, 00:14:32.874 "mibps": 280.90741838860254, 00:14:32.874 "io_failed": 0, 00:14:32.874 "io_timeout": 0, 00:14:32.874 "avg_latency_us": 15277.366477772985, 00:14:32.874 "min_latency_us": 284.3947598253275, 00:14:32.874 "max_latency_us": 116762.82969432314 00:14:32.874 } 00:14:32.874 ], 00:14:32.874 "core_count": 1 00:14:32.874 } 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.874 04:03:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.874 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:33.134 /dev/nbd0 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.134 1+0 records in 00:14:33.134 1+0 records out 00:14:33.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577497 s, 7.1 MB/s 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 
00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.134 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:33.395 /dev/nbd1 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.395 1+0 records in 00:14:33.395 1+0 records out 00:14:33.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404391 s, 10.1 MB/s 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.395 04:03:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.655 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:33.916 04:03:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:33.916 /dev/nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:33.916 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.916 1+0 records in 00:14:33.916 1+0 records out 00:14:33.916 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000327606 s, 12.5 MB/s 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.176 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:34.437 04:03:30 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:34.437 04:03:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78678 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78678 ']' 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78678 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.437 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78678 00:14:34.697 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.697 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.697 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78678' 00:14:34.697 killing process with pid 78678 00:14:34.697 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78678 00:14:34.697 Received shutdown signal, test time was about 9.638492 seconds 00:14:34.697 00:14:34.697 Latency(us) 00:14:34.697 [2024-11-18T04:03:31.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.697 [2024-11-18T04:03:31.338Z] =================================================================================================================== 00:14:34.697 [2024-11-18T04:03:31.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.697 [2024-11-18 04:03:31.099372] bdev_raid.c:1387:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:14:34.697 04:03:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78678 00:14:34.957 [2024-11-18 04:03:31.485955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.340 04:03:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:36.341 00:14:36.341 real 0m12.961s 00:14:36.341 user 0m16.414s 00:14:36.341 sys 0m1.749s 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.341 ************************************ 00:14:36.341 END TEST raid_rebuild_test_io 00:14:36.341 ************************************ 00:14:36.341 04:03:32 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:36.341 04:03:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:36.341 04:03:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.341 04:03:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.341 ************************************ 00:14:36.341 START TEST raid_rebuild_test_sb_io 00:14:36.341 ************************************ 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:36.341 04:03:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79076 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79076 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79076 ']' 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.341 04:03:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.341 [2024-11-18 04:03:32.742114] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:36.341 [2024-11-18 04:03:32.742322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.341 Zero copy mechanism will not be used. 00:14:36.341 -allocations --file-prefix=spdk_pid79076 ] 00:14:36.341 [2024-11-18 04:03:32.916729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.601 [2024-11-18 04:03:33.023398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.601 [2024-11-18 04:03:33.206719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.601 [2024-11-18 04:03:33.206868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 BaseBdev1_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 [2024-11-18 04:03:33.594240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:37.171 [2024-11-18 04:03:33.594317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.171 [2024-11-18 04:03:33.594338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:37.171 [2024-11-18 04:03:33.594348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.171 [2024-11-18 04:03:33.596368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.171 [2024-11-18 04:03:33.596484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.171 BaseBdev1 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 BaseBdev2_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 [2024-11-18 04:03:33.648790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:37.171 [2024-11-18 04:03:33.648853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.171 [2024-11-18 04:03:33.648870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:37.171 [2024-11-18 04:03:33.648882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.171 [2024-11-18 04:03:33.650863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.171 [2024-11-18 04:03:33.650894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:37.171 BaseBdev2 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 BaseBdev3_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 [2024-11-18 04:03:33.739181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:37.171 [2024-11-18 04:03:33.739231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.171 [2024-11-18 04:03:33.739252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.171 [2024-11-18 04:03:33.739261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.171 [2024-11-18 04:03:33.741259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.171 [2024-11-18 04:03:33.741299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.171 BaseBdev3 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 BaseBdev4_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 [2024-11-18 04:03:33.792513] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:37.171 [2024-11-18 04:03:33.792566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.171 [2024-11-18 04:03:33.792586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:37.171 [2024-11-18 04:03:33.792596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.171 [2024-11-18 04:03:33.794661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.171 [2024-11-18 04:03:33.794734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:37.171 BaseBdev4 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.432 spare_malloc 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.432 spare_delay 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.432 [2024-11-18 04:03:33.859929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.432 [2024-11-18 04:03:33.860030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.432 [2024-11-18 04:03:33.860055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:37.432 [2024-11-18 04:03:33.860083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.432 [2024-11-18 04:03:33.862045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.432 [2024-11-18 04:03:33.862078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.432 spare 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.432 [2024-11-18 04:03:33.867974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.432 [2024-11-18 04:03:33.869686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.432 [2024-11-18 04:03:33.869802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.432 [2024-11-18 04:03:33.869898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.432 [2024-11-18 04:03:33.870106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:37.432 [2024-11-18 04:03:33.870157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.432 [2024-11-18 04:03:33.870397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.432 [2024-11-18 04:03:33.870601] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.432 [2024-11-18 04:03:33.870642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.432 [2024-11-18 04:03:33.870821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.432 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.432 "name": "raid_bdev1", 00:14:37.432 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:37.432 "strip_size_kb": 0, 00:14:37.432 "state": "online", 00:14:37.432 "raid_level": "raid1", 00:14:37.432 "superblock": true, 00:14:37.432 "num_base_bdevs": 4, 00:14:37.432 "num_base_bdevs_discovered": 4, 00:14:37.432 "num_base_bdevs_operational": 4, 00:14:37.432 "base_bdevs_list": [ 00:14:37.432 { 00:14:37.432 "name": "BaseBdev1", 00:14:37.432 "uuid": "5247fbdb-d5a2-5d10-9d13-83a0aea7e4ad", 00:14:37.432 "is_configured": true, 00:14:37.432 "data_offset": 2048, 00:14:37.432 "data_size": 63488 00:14:37.432 }, 00:14:37.432 { 00:14:37.432 "name": "BaseBdev2", 00:14:37.432 "uuid": "38a9771d-752a-5f15-a24e-57d34ec0c5a2", 00:14:37.432 "is_configured": true, 00:14:37.432 "data_offset": 2048, 00:14:37.432 "data_size": 63488 00:14:37.432 }, 00:14:37.432 { 00:14:37.432 "name": "BaseBdev3", 00:14:37.432 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:37.432 "is_configured": true, 00:14:37.432 "data_offset": 2048, 00:14:37.433 "data_size": 63488 00:14:37.433 }, 00:14:37.433 { 00:14:37.433 "name": "BaseBdev4", 00:14:37.433 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:37.433 "is_configured": true, 00:14:37.433 "data_offset": 2048, 00:14:37.433 "data_size": 63488 00:14:37.433 } 00:14:37.433 ] 00:14:37.433 }' 00:14:37.433 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:37.433 04:03:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.692 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.692 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.692 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.693 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.693 [2024-11-18 04:03:34.299603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.693 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:37.953 04:03:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 [2024-11-18 04:03:34.399101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.953 "name": "raid_bdev1", 00:14:37.953 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:37.953 "strip_size_kb": 0, 00:14:37.953 "state": "online", 00:14:37.953 "raid_level": "raid1", 00:14:37.953 "superblock": true, 00:14:37.953 "num_base_bdevs": 4, 00:14:37.953 "num_base_bdevs_discovered": 3, 00:14:37.953 "num_base_bdevs_operational": 3, 00:14:37.953 "base_bdevs_list": [ 00:14:37.953 { 00:14:37.953 "name": null, 00:14:37.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.953 "is_configured": false, 00:14:37.953 "data_offset": 0, 00:14:37.953 "data_size": 63488 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "name": "BaseBdev2", 00:14:37.953 "uuid": "38a9771d-752a-5f15-a24e-57d34ec0c5a2", 00:14:37.953 "is_configured": true, 00:14:37.953 "data_offset": 2048, 00:14:37.953 "data_size": 63488 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "name": "BaseBdev3", 00:14:37.953 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:37.953 "is_configured": true, 00:14:37.953 "data_offset": 2048, 00:14:37.953 "data_size": 63488 00:14:37.953 }, 00:14:37.953 { 00:14:37.953 "name": "BaseBdev4", 00:14:37.953 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:37.953 "is_configured": true, 00:14:37.953 "data_offset": 2048, 00:14:37.953 "data_size": 63488 00:14:37.953 } 00:14:37.953 ] 00:14:37.953 }' 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.953 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 [2024-11-18 04:03:34.494502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:37.953 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:37.953 Zero copy mechanism will not be used. 
00:14:37.953 Running I/O for 60 seconds... 00:14:38.523 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.523 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.523 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.523 [2024-11-18 04:03:34.875592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.523 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.523 04:03:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:38.523 [2024-11-18 04:03:34.947090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:38.523 [2024-11-18 04:03:34.949262] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.782 [2024-11-18 04:03:35.220011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.782 [2024-11-18 04:03:35.220814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.042 180.00 IOPS, 540.00 MiB/s [2024-11-18T04:03:35.683Z] [2024-11-18 04:03:35.566117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.302 [2024-11-18 04:03:35.781038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.302 [2024-11-18 04:03:35.781248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.302 04:03:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.302 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.562 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.562 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.562 "name": "raid_bdev1", 00:14:39.562 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:39.562 "strip_size_kb": 0, 00:14:39.562 "state": "online", 00:14:39.562 "raid_level": "raid1", 00:14:39.562 "superblock": true, 00:14:39.562 "num_base_bdevs": 4, 00:14:39.562 "num_base_bdevs_discovered": 4, 00:14:39.562 "num_base_bdevs_operational": 4, 00:14:39.562 "process": { 00:14:39.562 "type": "rebuild", 00:14:39.562 "target": "spare", 00:14:39.562 "progress": { 00:14:39.562 "blocks": 12288, 00:14:39.562 "percent": 19 00:14:39.562 } 00:14:39.562 }, 00:14:39.562 "base_bdevs_list": [ 00:14:39.562 { 00:14:39.562 "name": "spare", 00:14:39.562 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:39.562 "is_configured": true, 00:14:39.562 "data_offset": 2048, 00:14:39.562 "data_size": 63488 00:14:39.562 }, 00:14:39.562 { 00:14:39.562 "name": "BaseBdev2", 00:14:39.562 "uuid": 
"38a9771d-752a-5f15-a24e-57d34ec0c5a2", 00:14:39.562 "is_configured": true, 00:14:39.562 "data_offset": 2048, 00:14:39.562 "data_size": 63488 00:14:39.562 }, 00:14:39.562 { 00:14:39.562 "name": "BaseBdev3", 00:14:39.562 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:39.562 "is_configured": true, 00:14:39.562 "data_offset": 2048, 00:14:39.562 "data_size": 63488 00:14:39.562 }, 00:14:39.562 { 00:14:39.562 "name": "BaseBdev4", 00:14:39.562 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:39.562 "is_configured": true, 00:14:39.562 "data_offset": 2048, 00:14:39.562 "data_size": 63488 00:14:39.562 } 00:14:39.562 ] 00:14:39.562 }' 00:14:39.562 04:03:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.562 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.562 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.562 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.562 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.562 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.562 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.562 [2024-11-18 04:03:36.084600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.562 [2024-11-18 04:03:36.105886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:39.562 [2024-11-18 04:03:36.106576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:39.822 [2024-11-18 04:03:36.210182] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on 
raid bdev raid_bdev1: No such device 00:14:39.822 [2024-11-18 04:03:36.212419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.822 [2024-11-18 04:03:36.212493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.822 [2024-11-18 04:03:36.212520] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.822 [2024-11-18 04:03:36.233662] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.822 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.822 "name": "raid_bdev1", 00:14:39.822 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:39.822 "strip_size_kb": 0, 00:14:39.822 "state": "online", 00:14:39.822 "raid_level": "raid1", 00:14:39.823 "superblock": true, 00:14:39.823 "num_base_bdevs": 4, 00:14:39.823 "num_base_bdevs_discovered": 3, 00:14:39.823 "num_base_bdevs_operational": 3, 00:14:39.823 "base_bdevs_list": [ 00:14:39.823 { 00:14:39.823 "name": null, 00:14:39.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.823 "is_configured": false, 00:14:39.823 "data_offset": 0, 00:14:39.823 "data_size": 63488 00:14:39.823 }, 00:14:39.823 { 00:14:39.823 "name": "BaseBdev2", 00:14:39.823 "uuid": "38a9771d-752a-5f15-a24e-57d34ec0c5a2", 00:14:39.823 "is_configured": true, 00:14:39.823 "data_offset": 2048, 00:14:39.823 "data_size": 63488 00:14:39.823 }, 00:14:39.823 { 00:14:39.823 "name": "BaseBdev3", 00:14:39.823 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:39.823 "is_configured": true, 00:14:39.823 "data_offset": 2048, 00:14:39.823 "data_size": 63488 00:14:39.823 }, 00:14:39.823 { 00:14:39.823 "name": "BaseBdev4", 00:14:39.823 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:39.823 "is_configured": true, 00:14:39.823 "data_offset": 2048, 00:14:39.823 "data_size": 63488 00:14:39.823 } 00:14:39.823 ] 00:14:39.823 }' 00:14:39.823 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.823 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.095 151.00 IOPS, 
453.00 MiB/s [2024-11-18T04:03:36.736Z] 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.095 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.095 "name": "raid_bdev1", 00:14:40.095 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:40.095 "strip_size_kb": 0, 00:14:40.095 "state": "online", 00:14:40.095 "raid_level": "raid1", 00:14:40.095 "superblock": true, 00:14:40.095 "num_base_bdevs": 4, 00:14:40.095 "num_base_bdevs_discovered": 3, 00:14:40.095 "num_base_bdevs_operational": 3, 00:14:40.095 "base_bdevs_list": [ 00:14:40.095 { 00:14:40.095 "name": null, 00:14:40.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.096 "is_configured": false, 00:14:40.096 "data_offset": 0, 00:14:40.096 "data_size": 63488 00:14:40.096 }, 00:14:40.096 { 00:14:40.096 "name": "BaseBdev2", 00:14:40.096 "uuid": "38a9771d-752a-5f15-a24e-57d34ec0c5a2", 00:14:40.096 
"is_configured": true, 00:14:40.096 "data_offset": 2048, 00:14:40.096 "data_size": 63488 00:14:40.096 }, 00:14:40.096 { 00:14:40.096 "name": "BaseBdev3", 00:14:40.096 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:40.096 "is_configured": true, 00:14:40.096 "data_offset": 2048, 00:14:40.096 "data_size": 63488 00:14:40.096 }, 00:14:40.096 { 00:14:40.096 "name": "BaseBdev4", 00:14:40.096 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:40.096 "is_configured": true, 00:14:40.096 "data_offset": 2048, 00:14:40.096 "data_size": 63488 00:14:40.096 } 00:14:40.096 ] 00:14:40.096 }' 00:14:40.096 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.356 [2024-11-18 04:03:36.814482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.356 04:03:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.356 [2024-11-18 04:03:36.859614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:40.356 [2024-11-18 04:03:36.861537] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.356 
[2024-11-18 04:03:36.976744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.356 [2024-11-18 04:03:36.977356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:40.616 [2024-11-18 04:03:37.192694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.616 [2024-11-18 04:03:37.193504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:40.879 147.67 IOPS, 443.00 MiB/s [2024-11-18T04:03:37.780Z] [2024-11-18 04:03:37.519423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.139 [2024-11-18 04:03:37.654312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:41.139 [2024-11-18 04:03:37.654669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:41.399 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.400 
04:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.400 "name": "raid_bdev1", 00:14:41.400 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:41.400 "strip_size_kb": 0, 00:14:41.400 "state": "online", 00:14:41.400 "raid_level": "raid1", 00:14:41.400 "superblock": true, 00:14:41.400 "num_base_bdevs": 4, 00:14:41.400 "num_base_bdevs_discovered": 4, 00:14:41.400 "num_base_bdevs_operational": 4, 00:14:41.400 "process": { 00:14:41.400 "type": "rebuild", 00:14:41.400 "target": "spare", 00:14:41.400 "progress": { 00:14:41.400 "blocks": 12288, 00:14:41.400 "percent": 19 00:14:41.400 } 00:14:41.400 }, 00:14:41.400 "base_bdevs_list": [ 00:14:41.400 { 00:14:41.400 "name": "spare", 00:14:41.400 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:41.400 "is_configured": true, 00:14:41.400 "data_offset": 2048, 00:14:41.400 "data_size": 63488 00:14:41.400 }, 00:14:41.400 { 00:14:41.400 "name": "BaseBdev2", 00:14:41.400 "uuid": "38a9771d-752a-5f15-a24e-57d34ec0c5a2", 00:14:41.400 "is_configured": true, 00:14:41.400 "data_offset": 2048, 00:14:41.400 "data_size": 63488 00:14:41.400 }, 00:14:41.400 { 00:14:41.400 "name": "BaseBdev3", 00:14:41.400 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:41.400 "is_configured": true, 00:14:41.400 "data_offset": 2048, 00:14:41.400 "data_size": 63488 00:14:41.400 }, 00:14:41.400 { 00:14:41.400 "name": "BaseBdev4", 00:14:41.400 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:41.400 "is_configured": true, 00:14:41.400 "data_offset": 2048, 00:14:41.400 "data_size": 63488 00:14:41.400 } 00:14:41.400 ] 00:14:41.400 }' 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.400 [2024-11-18 04:03:37.995989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:41.400 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:41.400 04:03:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.400 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.400 [2024-11-18 04:03:38.018738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.660 [2024-11-18 04:03:38.219674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.920 [2024-11-18 04:03:38.332018] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:41.920 [2024-11-18 
04:03:38.332052] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:41.920 [2024-11-18 04:03:38.332110] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.920 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.921 "name": "raid_bdev1", 00:14:41.921 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:41.921 "strip_size_kb": 0, 00:14:41.921 
"state": "online", 00:14:41.921 "raid_level": "raid1", 00:14:41.921 "superblock": true, 00:14:41.921 "num_base_bdevs": 4, 00:14:41.921 "num_base_bdevs_discovered": 3, 00:14:41.921 "num_base_bdevs_operational": 3, 00:14:41.921 "process": { 00:14:41.921 "type": "rebuild", 00:14:41.921 "target": "spare", 00:14:41.921 "progress": { 00:14:41.921 "blocks": 16384, 00:14:41.921 "percent": 25 00:14:41.921 } 00:14:41.921 }, 00:14:41.921 "base_bdevs_list": [ 00:14:41.921 { 00:14:41.921 "name": "spare", 00:14:41.921 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:41.921 "is_configured": true, 00:14:41.921 "data_offset": 2048, 00:14:41.921 "data_size": 63488 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "name": null, 00:14:41.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.921 "is_configured": false, 00:14:41.921 "data_offset": 0, 00:14:41.921 "data_size": 63488 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "name": "BaseBdev3", 00:14:41.921 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:41.921 "is_configured": true, 00:14:41.921 "data_offset": 2048, 00:14:41.921 "data_size": 63488 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "name": "BaseBdev4", 00:14:41.921 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:41.921 "is_configured": true, 00:14:41.921 "data_offset": 2048, 00:14:41.921 "data_size": 63488 00:14:41.921 } 00:14:41.921 ] 00:14:41.921 }' 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=492 00:14:41.921 04:03:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.921 134.25 IOPS, 402.75 MiB/s [2024-11-18T04:03:38.562Z] 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.921 "name": "raid_bdev1", 00:14:41.921 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:41.921 "strip_size_kb": 0, 00:14:41.921 "state": "online", 00:14:41.921 "raid_level": "raid1", 00:14:41.921 "superblock": true, 00:14:41.921 "num_base_bdevs": 4, 00:14:41.921 "num_base_bdevs_discovered": 3, 00:14:41.921 "num_base_bdevs_operational": 3, 00:14:41.921 "process": { 00:14:41.921 "type": "rebuild", 00:14:41.921 "target": "spare", 00:14:41.921 "progress": { 00:14:41.921 "blocks": 18432, 00:14:41.921 "percent": 29 00:14:41.921 } 00:14:41.921 }, 00:14:41.921 "base_bdevs_list": [ 00:14:41.921 { 
00:14:41.921 "name": "spare", 00:14:41.921 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:41.921 "is_configured": true, 00:14:41.921 "data_offset": 2048, 00:14:41.921 "data_size": 63488 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "name": null, 00:14:41.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.921 "is_configured": false, 00:14:41.921 "data_offset": 0, 00:14:41.921 "data_size": 63488 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "name": "BaseBdev3", 00:14:41.921 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:41.921 "is_configured": true, 00:14:41.921 "data_offset": 2048, 00:14:41.921 "data_size": 63488 00:14:41.921 }, 00:14:41.921 { 00:14:41.921 "name": "BaseBdev4", 00:14:41.921 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:41.921 "is_configured": true, 00:14:41.921 "data_offset": 2048, 00:14:41.921 "data_size": 63488 00:14:41.921 } 00:14:41.921 ] 00:14:41.921 }' 00:14:41.921 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.182 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.182 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.182 [2024-11-18 04:03:38.585190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:42.182 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.182 04:03:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.182 [2024-11-18 04:03:38.792931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:42.182 [2024-11-18 04:03:38.793514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:42.764 [2024-11-18 
04:03:39.116490] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:43.030 [2024-11-18 04:03:39.463449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:43.030 117.60 IOPS, 352.80 MiB/s [2024-11-18T04:03:39.671Z] 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.030 "name": "raid_bdev1", 00:14:43.030 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:43.030 "strip_size_kb": 0, 00:14:43.030 "state": "online", 00:14:43.030 "raid_level": "raid1", 00:14:43.030 "superblock": true, 00:14:43.030 "num_base_bdevs": 4, 00:14:43.030 
"num_base_bdevs_discovered": 3, 00:14:43.030 "num_base_bdevs_operational": 3, 00:14:43.030 "process": { 00:14:43.030 "type": "rebuild", 00:14:43.030 "target": "spare", 00:14:43.030 "progress": { 00:14:43.030 "blocks": 32768, 00:14:43.030 "percent": 51 00:14:43.030 } 00:14:43.030 }, 00:14:43.030 "base_bdevs_list": [ 00:14:43.030 { 00:14:43.030 "name": "spare", 00:14:43.030 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:43.030 "is_configured": true, 00:14:43.030 "data_offset": 2048, 00:14:43.030 "data_size": 63488 00:14:43.030 }, 00:14:43.030 { 00:14:43.030 "name": null, 00:14:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.030 "is_configured": false, 00:14:43.030 "data_offset": 0, 00:14:43.030 "data_size": 63488 00:14:43.030 }, 00:14:43.030 { 00:14:43.030 "name": "BaseBdev3", 00:14:43.030 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:43.030 "is_configured": true, 00:14:43.030 "data_offset": 2048, 00:14:43.030 "data_size": 63488 00:14:43.030 }, 00:14:43.030 { 00:14:43.030 "name": "BaseBdev4", 00:14:43.030 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:43.030 "is_configured": true, 00:14:43.030 "data_offset": 2048, 00:14:43.030 "data_size": 63488 00:14:43.030 } 00:14:43.030 ] 00:14:43.030 }' 00:14:43.030 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.289 [2024-11-18 04:03:39.686893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:43.289 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.289 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.289 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.289 04:03:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.858 [2024-11-18 
04:03:40.245888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:43.858 [2024-11-18 04:03:40.352545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:43.858 [2024-11-18 04:03:40.352738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:44.118 104.33 IOPS, 313.00 MiB/s [2024-11-18T04:03:40.759Z] [2024-11-18 04:03:40.675083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:44.118 [2024-11-18 04:03:40.675957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.378 "name": "raid_bdev1", 00:14:44.378 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:44.378 "strip_size_kb": 0, 00:14:44.378 "state": "online", 00:14:44.378 "raid_level": "raid1", 00:14:44.378 "superblock": true, 00:14:44.378 "num_base_bdevs": 4, 00:14:44.378 "num_base_bdevs_discovered": 3, 00:14:44.378 "num_base_bdevs_operational": 3, 00:14:44.378 "process": { 00:14:44.378 "type": "rebuild", 00:14:44.378 "target": "spare", 00:14:44.378 "progress": { 00:14:44.378 "blocks": 51200, 00:14:44.378 "percent": 80 00:14:44.378 } 00:14:44.378 }, 00:14:44.378 "base_bdevs_list": [ 00:14:44.378 { 00:14:44.378 "name": "spare", 00:14:44.378 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:44.378 "is_configured": true, 00:14:44.378 "data_offset": 2048, 00:14:44.378 "data_size": 63488 00:14:44.378 }, 00:14:44.378 { 00:14:44.378 "name": null, 00:14:44.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.378 "is_configured": false, 00:14:44.378 "data_offset": 0, 00:14:44.378 "data_size": 63488 00:14:44.378 }, 00:14:44.378 { 00:14:44.378 "name": "BaseBdev3", 00:14:44.378 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:44.378 "is_configured": true, 00:14:44.378 "data_offset": 2048, 00:14:44.378 "data_size": 63488 00:14:44.378 }, 00:14:44.378 { 00:14:44.378 "name": "BaseBdev4", 00:14:44.378 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:44.378 "is_configured": true, 00:14:44.378 "data_offset": 2048, 00:14:44.378 "data_size": 63488 00:14:44.378 } 00:14:44.378 ] 00:14:44.378 }' 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.378 04:03:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.378 04:03:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.378 [2024-11-18 04:03:40.892494] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:44.638 [2024-11-18 04:03:41.105374] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:44.897 [2024-11-18 04:03:41.327045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:45.157 93.29 IOPS, 279.86 MiB/s [2024-11-18T04:03:41.798Z] [2024-11-18 04:03:41.655732] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:45.157 [2024-11-18 04:03:41.755588] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:45.157 [2024-11-18 04:03:41.757376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.417 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.417 "name": "raid_bdev1", 00:14:45.417 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:45.417 "strip_size_kb": 0, 00:14:45.417 "state": "online", 00:14:45.417 "raid_level": "raid1", 00:14:45.417 "superblock": true, 00:14:45.417 "num_base_bdevs": 4, 00:14:45.417 "num_base_bdevs_discovered": 3, 00:14:45.417 "num_base_bdevs_operational": 3, 00:14:45.417 "base_bdevs_list": [ 00:14:45.417 { 00:14:45.417 "name": "spare", 00:14:45.417 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:45.417 "is_configured": true, 00:14:45.417 "data_offset": 2048, 00:14:45.417 "data_size": 63488 00:14:45.417 }, 00:14:45.417 { 00:14:45.417 "name": null, 00:14:45.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.417 "is_configured": false, 00:14:45.417 "data_offset": 0, 00:14:45.417 "data_size": 63488 00:14:45.417 }, 00:14:45.417 { 00:14:45.417 "name": "BaseBdev3", 00:14:45.417 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:45.417 "is_configured": true, 00:14:45.417 "data_offset": 2048, 00:14:45.417 "data_size": 63488 00:14:45.417 }, 00:14:45.417 { 00:14:45.417 "name": "BaseBdev4", 00:14:45.417 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:45.417 "is_configured": true, 00:14:45.417 "data_offset": 2048, 00:14:45.417 "data_size": 63488 00:14:45.417 } 00:14:45.418 ] 00:14:45.418 }' 00:14:45.418 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:45.418 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:45.418 04:03:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.418 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.678 "name": "raid_bdev1", 00:14:45.678 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:45.678 "strip_size_kb": 0, 00:14:45.678 "state": "online", 00:14:45.678 "raid_level": "raid1", 00:14:45.678 "superblock": true, 00:14:45.678 "num_base_bdevs": 4, 00:14:45.678 
"num_base_bdevs_discovered": 3, 00:14:45.678 "num_base_bdevs_operational": 3, 00:14:45.678 "base_bdevs_list": [ 00:14:45.678 { 00:14:45.678 "name": "spare", 00:14:45.678 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:45.678 "is_configured": true, 00:14:45.678 "data_offset": 2048, 00:14:45.678 "data_size": 63488 00:14:45.678 }, 00:14:45.678 { 00:14:45.678 "name": null, 00:14:45.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.678 "is_configured": false, 00:14:45.678 "data_offset": 0, 00:14:45.678 "data_size": 63488 00:14:45.678 }, 00:14:45.678 { 00:14:45.678 "name": "BaseBdev3", 00:14:45.678 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:45.678 "is_configured": true, 00:14:45.678 "data_offset": 2048, 00:14:45.678 "data_size": 63488 00:14:45.678 }, 00:14:45.678 { 00:14:45.678 "name": "BaseBdev4", 00:14:45.678 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:45.678 "is_configured": true, 00:14:45.678 "data_offset": 2048, 00:14:45.678 "data_size": 63488 00:14:45.678 } 00:14:45.678 ] 00:14:45.678 }' 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.678 04:03:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.678 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.678 "name": "raid_bdev1", 00:14:45.678 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:45.678 "strip_size_kb": 0, 00:14:45.678 "state": "online", 00:14:45.678 "raid_level": "raid1", 00:14:45.678 "superblock": true, 00:14:45.678 "num_base_bdevs": 4, 00:14:45.678 "num_base_bdevs_discovered": 3, 00:14:45.678 "num_base_bdevs_operational": 3, 00:14:45.678 "base_bdevs_list": [ 00:14:45.678 { 00:14:45.678 "name": "spare", 00:14:45.678 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:45.678 "is_configured": true, 00:14:45.678 "data_offset": 2048, 00:14:45.678 "data_size": 63488 00:14:45.678 }, 00:14:45.678 { 00:14:45.678 "name": null, 00:14:45.678 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:45.678 "is_configured": false, 00:14:45.678 "data_offset": 0, 00:14:45.678 "data_size": 63488 00:14:45.678 }, 00:14:45.678 { 00:14:45.678 "name": "BaseBdev3", 00:14:45.678 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:45.678 "is_configured": true, 00:14:45.678 "data_offset": 2048, 00:14:45.678 "data_size": 63488 00:14:45.678 }, 00:14:45.678 { 00:14:45.678 "name": "BaseBdev4", 00:14:45.678 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:45.678 "is_configured": true, 00:14:45.678 "data_offset": 2048, 00:14:45.678 "data_size": 63488 00:14:45.678 } 00:14:45.679 ] 00:14:45.679 }' 00:14:45.679 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.679 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.198 85.75 IOPS, 257.25 MiB/s [2024-11-18T04:03:42.839Z] 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.198 [2024-11-18 04:03:42.654999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.198 [2024-11-18 04:03:42.655090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.198 00:14:46.198 Latency(us) 00:14:46.198 [2024-11-18T04:03:42.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.198 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:46.198 raid_bdev1 : 8.27 84.44 253.32 0.00 0.00 14580.17 313.01 115389.15 00:14:46.198 [2024-11-18T04:03:42.839Z] =================================================================================================================== 00:14:46.198 
[2024-11-18T04:03:42.839Z] Total : 84.44 253.32 0.00 0.00 14580.17 313.01 115389.15 00:14:46.198 [2024-11-18 04:03:42.767681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.198 { 00:14:46.198 "results": [ 00:14:46.198 { 00:14:46.198 "job": "raid_bdev1", 00:14:46.198 "core_mask": "0x1", 00:14:46.198 "workload": "randrw", 00:14:46.198 "percentage": 50, 00:14:46.198 "status": "finished", 00:14:46.198 "queue_depth": 2, 00:14:46.198 "io_size": 3145728, 00:14:46.198 "runtime": 8.266118, 00:14:46.198 "iops": 84.4410883077159, 00:14:46.198 "mibps": 253.3232649231477, 00:14:46.198 "io_failed": 0, 00:14:46.198 "io_timeout": 0, 00:14:46.198 "avg_latency_us": 14580.170718584603, 00:14:46.198 "min_latency_us": 313.0131004366812, 00:14:46.198 "max_latency_us": 115389.14934497817 00:14:46.198 } 00:14:46.198 ], 00:14:46.198 "core_count": 1 00:14:46.198 } 00:14:46.198 [2024-11-18 04:03:42.767790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.198 [2024-11-18 04:03:42.767923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.198 [2024-11-18 04:03:42.767935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.198 04:03:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:46.459 /dev/nbd0 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:46.459 04:03:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.459 1+0 records in 00:14:46.459 1+0 records out 00:14:46.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567976 s, 7.2 MB/s 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:46.459 04:03:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.459 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:46.719 /dev/nbd1 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.719 1+0 records in 00:14:46.719 1+0 records out 00:14:46.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441531 s, 9.3 MB/s 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.719 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.978 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev4') 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.238 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:47.498 /dev/nbd1 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.498 1+0 records in 00:14:47.498 1+0 records out 00:14:47.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366487 s, 11.2 MB/s 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.498 04:03:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.498 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.758 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.018 [2024-11-18 04:03:44.435352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.018 [2024-11-18 04:03:44.435405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.018 [2024-11-18 04:03:44.435442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:48.018 [2024-11-18 04:03:44.435451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.018 [2024-11-18 04:03:44.437708] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.018 [2024-11-18 04:03:44.437796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.018 [2024-11-18 04:03:44.437913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:48.018 [2024-11-18 04:03:44.437998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.018 [2024-11-18 04:03:44.438177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.018 [2024-11-18 04:03:44.438313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:48.018 spare 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.018 [2024-11-18 04:03:44.538229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:48.018 [2024-11-18 04:03:44.538251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:48.018 [2024-11-18 04:03:44.538497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:48.018 [2024-11-18 04:03:44.538628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:48.018 [2024-11-18 04:03:44.538638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:48.018 [2024-11-18 04:03:44.538791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.018 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.018 "name": "raid_bdev1", 00:14:48.018 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:48.018 "strip_size_kb": 0, 00:14:48.018 "state": "online", 00:14:48.018 
"raid_level": "raid1", 00:14:48.018 "superblock": true, 00:14:48.018 "num_base_bdevs": 4, 00:14:48.018 "num_base_bdevs_discovered": 3, 00:14:48.018 "num_base_bdevs_operational": 3, 00:14:48.018 "base_bdevs_list": [ 00:14:48.018 { 00:14:48.018 "name": "spare", 00:14:48.019 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:48.019 "is_configured": true, 00:14:48.019 "data_offset": 2048, 00:14:48.019 "data_size": 63488 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": null, 00:14:48.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.019 "is_configured": false, 00:14:48.019 "data_offset": 2048, 00:14:48.019 "data_size": 63488 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": "BaseBdev3", 00:14:48.019 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:48.019 "is_configured": true, 00:14:48.019 "data_offset": 2048, 00:14:48.019 "data_size": 63488 00:14:48.019 }, 00:14:48.019 { 00:14:48.019 "name": "BaseBdev4", 00:14:48.019 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:48.019 "is_configured": true, 00:14:48.019 "data_offset": 2048, 00:14:48.019 "data_size": 63488 00:14:48.019 } 00:14:48.019 ] 00:14:48.019 }' 00:14:48.019 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.019 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.589 04:03:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.589 "name": "raid_bdev1", 00:14:48.589 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:48.589 "strip_size_kb": 0, 00:14:48.589 "state": "online", 00:14:48.589 "raid_level": "raid1", 00:14:48.589 "superblock": true, 00:14:48.589 "num_base_bdevs": 4, 00:14:48.589 "num_base_bdevs_discovered": 3, 00:14:48.589 "num_base_bdevs_operational": 3, 00:14:48.589 "base_bdevs_list": [ 00:14:48.589 { 00:14:48.589 "name": "spare", 00:14:48.589 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:48.589 "is_configured": true, 00:14:48.589 "data_offset": 2048, 00:14:48.589 "data_size": 63488 00:14:48.589 }, 00:14:48.589 { 00:14:48.589 "name": null, 00:14:48.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.589 "is_configured": false, 00:14:48.589 "data_offset": 2048, 00:14:48.589 "data_size": 63488 00:14:48.589 }, 00:14:48.589 { 00:14:48.589 "name": "BaseBdev3", 00:14:48.589 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:48.589 "is_configured": true, 00:14:48.589 "data_offset": 2048, 00:14:48.589 "data_size": 63488 00:14:48.589 }, 00:14:48.589 { 00:14:48.589 "name": "BaseBdev4", 00:14:48.589 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:48.589 "is_configured": true, 00:14:48.589 "data_offset": 2048, 00:14:48.589 "data_size": 63488 00:14:48.589 } 00:14:48.589 ] 00:14:48.589 }' 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.589 [2024-11-18 04:03:45.190176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.589 04:03:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.589 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.849 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.849 "name": "raid_bdev1", 00:14:48.849 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:48.849 "strip_size_kb": 0, 00:14:48.849 "state": "online", 00:14:48.849 "raid_level": "raid1", 00:14:48.849 "superblock": true, 00:14:48.849 "num_base_bdevs": 4, 00:14:48.849 "num_base_bdevs_discovered": 2, 00:14:48.849 "num_base_bdevs_operational": 2, 00:14:48.849 "base_bdevs_list": [ 00:14:48.849 { 00:14:48.849 "name": null, 00:14:48.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.849 "is_configured": false, 00:14:48.849 "data_offset": 0, 00:14:48.849 "data_size": 
63488 00:14:48.849 }, 00:14:48.849 { 00:14:48.849 "name": null, 00:14:48.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.849 "is_configured": false, 00:14:48.849 "data_offset": 2048, 00:14:48.849 "data_size": 63488 00:14:48.849 }, 00:14:48.849 { 00:14:48.849 "name": "BaseBdev3", 00:14:48.849 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:48.849 "is_configured": true, 00:14:48.849 "data_offset": 2048, 00:14:48.849 "data_size": 63488 00:14:48.849 }, 00:14:48.849 { 00:14:48.849 "name": "BaseBdev4", 00:14:48.849 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:48.849 "is_configured": true, 00:14:48.849 "data_offset": 2048, 00:14:48.849 "data_size": 63488 00:14:48.849 } 00:14:48.849 ] 00:14:48.849 }' 00:14:48.849 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.849 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.109 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.109 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.109 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.109 [2024-11-18 04:03:45.625492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.109 [2024-11-18 04:03:45.625716] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:49.109 [2024-11-18 04:03:45.625735] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:49.109 [2024-11-18 04:03:45.625775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.109 [2024-11-18 04:03:45.639491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:49.109 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.109 04:03:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:49.109 [2024-11-18 04:03:45.641264] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.048 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.308 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.308 "name": "raid_bdev1", 00:14:50.308 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:50.308 "strip_size_kb": 0, 00:14:50.308 "state": "online", 
00:14:50.309 "raid_level": "raid1", 00:14:50.309 "superblock": true, 00:14:50.309 "num_base_bdevs": 4, 00:14:50.309 "num_base_bdevs_discovered": 3, 00:14:50.309 "num_base_bdevs_operational": 3, 00:14:50.309 "process": { 00:14:50.309 "type": "rebuild", 00:14:50.309 "target": "spare", 00:14:50.309 "progress": { 00:14:50.309 "blocks": 20480, 00:14:50.309 "percent": 32 00:14:50.309 } 00:14:50.309 }, 00:14:50.309 "base_bdevs_list": [ 00:14:50.309 { 00:14:50.309 "name": "spare", 00:14:50.309 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:50.309 "is_configured": true, 00:14:50.309 "data_offset": 2048, 00:14:50.309 "data_size": 63488 00:14:50.309 }, 00:14:50.309 { 00:14:50.309 "name": null, 00:14:50.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.309 "is_configured": false, 00:14:50.309 "data_offset": 2048, 00:14:50.309 "data_size": 63488 00:14:50.309 }, 00:14:50.309 { 00:14:50.309 "name": "BaseBdev3", 00:14:50.309 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:50.309 "is_configured": true, 00:14:50.309 "data_offset": 2048, 00:14:50.309 "data_size": 63488 00:14:50.309 }, 00:14:50.309 { 00:14:50.309 "name": "BaseBdev4", 00:14:50.309 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:50.309 "is_configured": true, 00:14:50.309 "data_offset": 2048, 00:14:50.309 "data_size": 63488 00:14:50.309 } 00:14:50.309 ] 00:14:50.309 }' 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:50.309 04:03:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.309 [2024-11-18 04:03:46.792981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.309 [2024-11-18 04:03:46.845793] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.309 [2024-11-18 04:03:46.845859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.309 [2024-11-18 04:03:46.845892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.309 [2024-11-18 04:03:46.845899] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.309 04:03:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.309 "name": "raid_bdev1", 00:14:50.309 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:50.309 "strip_size_kb": 0, 00:14:50.309 "state": "online", 00:14:50.309 "raid_level": "raid1", 00:14:50.309 "superblock": true, 00:14:50.309 "num_base_bdevs": 4, 00:14:50.309 "num_base_bdevs_discovered": 2, 00:14:50.309 "num_base_bdevs_operational": 2, 00:14:50.309 "base_bdevs_list": [ 00:14:50.309 { 00:14:50.309 "name": null, 00:14:50.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.309 "is_configured": false, 00:14:50.309 "data_offset": 0, 00:14:50.309 "data_size": 63488 00:14:50.309 }, 00:14:50.309 { 00:14:50.309 "name": null, 00:14:50.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.309 "is_configured": false, 00:14:50.309 "data_offset": 2048, 00:14:50.309 "data_size": 63488 00:14:50.309 }, 00:14:50.309 { 00:14:50.309 "name": "BaseBdev3", 00:14:50.309 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:50.309 "is_configured": true, 00:14:50.309 "data_offset": 2048, 00:14:50.309 "data_size": 63488 00:14:50.309 }, 00:14:50.309 { 00:14:50.309 "name": "BaseBdev4", 00:14:50.309 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:50.309 "is_configured": true, 00:14:50.309 "data_offset": 2048, 00:14:50.309 
"data_size": 63488 00:14:50.309 } 00:14:50.309 ] 00:14:50.309 }' 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.309 04:03:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.879 04:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:50.879 04:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.879 04:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.879 [2024-11-18 04:03:47.301794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:50.879 [2024-11-18 04:03:47.301919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.879 [2024-11-18 04:03:47.301962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:50.879 [2024-11-18 04:03:47.301992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.879 [2024-11-18 04:03:47.302488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.879 [2024-11-18 04:03:47.302543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:50.879 [2024-11-18 04:03:47.302660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:50.879 [2024-11-18 04:03:47.302698] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.879 [2024-11-18 04:03:47.302745] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:50.879 [2024-11-18 04:03:47.302789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.879 [2024-11-18 04:03:47.317298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:50.879 spare 00:14:50.879 04:03:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.879 04:03:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:50.879 [2024-11-18 04:03:47.319144] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.819 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.819 "name": "raid_bdev1", 00:14:51.819 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:51.820 "strip_size_kb": 0, 00:14:51.820 
"state": "online", 00:14:51.820 "raid_level": "raid1", 00:14:51.820 "superblock": true, 00:14:51.820 "num_base_bdevs": 4, 00:14:51.820 "num_base_bdevs_discovered": 3, 00:14:51.820 "num_base_bdevs_operational": 3, 00:14:51.820 "process": { 00:14:51.820 "type": "rebuild", 00:14:51.820 "target": "spare", 00:14:51.820 "progress": { 00:14:51.820 "blocks": 20480, 00:14:51.820 "percent": 32 00:14:51.820 } 00:14:51.820 }, 00:14:51.820 "base_bdevs_list": [ 00:14:51.820 { 00:14:51.820 "name": "spare", 00:14:51.820 "uuid": "3f18bc45-54e6-573b-a679-ed008bdf52ff", 00:14:51.820 "is_configured": true, 00:14:51.820 "data_offset": 2048, 00:14:51.820 "data_size": 63488 00:14:51.820 }, 00:14:51.820 { 00:14:51.820 "name": null, 00:14:51.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.820 "is_configured": false, 00:14:51.820 "data_offset": 2048, 00:14:51.820 "data_size": 63488 00:14:51.820 }, 00:14:51.820 { 00:14:51.820 "name": "BaseBdev3", 00:14:51.820 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:51.820 "is_configured": true, 00:14:51.820 "data_offset": 2048, 00:14:51.820 "data_size": 63488 00:14:51.820 }, 00:14:51.820 { 00:14:51.820 "name": "BaseBdev4", 00:14:51.820 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:51.820 "is_configured": true, 00:14:51.820 "data_offset": 2048, 00:14:51.820 "data_size": 63488 00:14:51.820 } 00:14:51.820 ] 00:14:51.820 }' 00:14:51.820 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.820 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.820 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:52.080 04:03:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.080 [2024-11-18 04:03:48.487326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.080 [2024-11-18 04:03:48.523663] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:52.080 [2024-11-18 04:03:48.523765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.080 [2024-11-18 04:03:48.523791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.080 [2024-11-18 04:03:48.523800] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.080 04:03:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.080 "name": "raid_bdev1", 00:14:52.080 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:52.080 "strip_size_kb": 0, 00:14:52.080 "state": "online", 00:14:52.080 "raid_level": "raid1", 00:14:52.080 "superblock": true, 00:14:52.080 "num_base_bdevs": 4, 00:14:52.080 "num_base_bdevs_discovered": 2, 00:14:52.080 "num_base_bdevs_operational": 2, 00:14:52.080 "base_bdevs_list": [ 00:14:52.080 { 00:14:52.080 "name": null, 00:14:52.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.080 "is_configured": false, 00:14:52.080 "data_offset": 0, 00:14:52.080 "data_size": 63488 00:14:52.080 }, 00:14:52.080 { 00:14:52.080 "name": null, 00:14:52.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.080 "is_configured": false, 00:14:52.080 "data_offset": 2048, 00:14:52.080 "data_size": 63488 00:14:52.080 }, 00:14:52.080 { 00:14:52.080 "name": "BaseBdev3", 00:14:52.080 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:52.080 "is_configured": true, 00:14:52.080 "data_offset": 2048, 00:14:52.080 "data_size": 63488 00:14:52.080 }, 00:14:52.080 { 00:14:52.080 "name": "BaseBdev4", 00:14:52.080 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:52.080 "is_configured": true, 00:14:52.080 "data_offset": 2048, 00:14:52.080 
"data_size": 63488 00:14:52.080 } 00:14:52.080 ] 00:14:52.080 }' 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.080 04:03:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.650 "name": "raid_bdev1", 00:14:52.650 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:52.650 "strip_size_kb": 0, 00:14:52.650 "state": "online", 00:14:52.650 "raid_level": "raid1", 00:14:52.650 "superblock": true, 00:14:52.650 "num_base_bdevs": 4, 00:14:52.650 "num_base_bdevs_discovered": 2, 00:14:52.650 "num_base_bdevs_operational": 2, 00:14:52.650 "base_bdevs_list": [ 00:14:52.650 { 00:14:52.650 "name": null, 00:14:52.650 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:52.650 "is_configured": false, 00:14:52.650 "data_offset": 0, 00:14:52.650 "data_size": 63488 00:14:52.650 }, 00:14:52.650 { 00:14:52.650 "name": null, 00:14:52.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.650 "is_configured": false, 00:14:52.650 "data_offset": 2048, 00:14:52.650 "data_size": 63488 00:14:52.650 }, 00:14:52.650 { 00:14:52.650 "name": "BaseBdev3", 00:14:52.650 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:52.650 "is_configured": true, 00:14:52.650 "data_offset": 2048, 00:14:52.650 "data_size": 63488 00:14:52.650 }, 00:14:52.650 { 00:14:52.650 "name": "BaseBdev4", 00:14:52.650 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:52.650 "is_configured": true, 00:14:52.650 "data_offset": 2048, 00:14:52.650 "data_size": 63488 00:14:52.650 } 00:14:52.650 ] 00:14:52.650 }' 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.650 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.651 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.651 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:52.651 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.651 04:03:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.651 [2024-11-18 04:03:49.162714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:52.651 [2024-11-18 04:03:49.162786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.651 [2024-11-18 04:03:49.162805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:52.651 [2024-11-18 04:03:49.162816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.651 [2024-11-18 04:03:49.163276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.651 [2024-11-18 04:03:49.163335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:52.651 [2024-11-18 04:03:49.163439] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:52.651 [2024-11-18 04:03:49.163487] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:52.651 [2024-11-18 04:03:49.163497] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:52.651 [2024-11-18 04:03:49.163508] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:52.651 BaseBdev1 00:14:52.651 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.651 04:03:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.590 "name": "raid_bdev1", 00:14:53.590 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:53.590 "strip_size_kb": 0, 00:14:53.590 "state": "online", 00:14:53.590 "raid_level": "raid1", 00:14:53.590 "superblock": true, 00:14:53.590 "num_base_bdevs": 4, 00:14:53.590 "num_base_bdevs_discovered": 2, 00:14:53.590 "num_base_bdevs_operational": 2, 00:14:53.590 "base_bdevs_list": [ 00:14:53.590 { 00:14:53.590 "name": null, 00:14:53.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.590 "is_configured": false, 00:14:53.590 
"data_offset": 0, 00:14:53.590 "data_size": 63488 00:14:53.590 }, 00:14:53.590 { 00:14:53.590 "name": null, 00:14:53.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.590 "is_configured": false, 00:14:53.590 "data_offset": 2048, 00:14:53.590 "data_size": 63488 00:14:53.590 }, 00:14:53.590 { 00:14:53.590 "name": "BaseBdev3", 00:14:53.590 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:53.590 "is_configured": true, 00:14:53.590 "data_offset": 2048, 00:14:53.590 "data_size": 63488 00:14:53.590 }, 00:14:53.590 { 00:14:53.590 "name": "BaseBdev4", 00:14:53.590 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:53.590 "is_configured": true, 00:14:53.590 "data_offset": 2048, 00:14:53.590 "data_size": 63488 00:14:53.590 } 00:14:53.590 ] 00:14:53.590 }' 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.590 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.161 "name": "raid_bdev1", 00:14:54.161 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:54.161 "strip_size_kb": 0, 00:14:54.161 "state": "online", 00:14:54.161 "raid_level": "raid1", 00:14:54.161 "superblock": true, 00:14:54.161 "num_base_bdevs": 4, 00:14:54.161 "num_base_bdevs_discovered": 2, 00:14:54.161 "num_base_bdevs_operational": 2, 00:14:54.161 "base_bdevs_list": [ 00:14:54.161 { 00:14:54.161 "name": null, 00:14:54.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.161 "is_configured": false, 00:14:54.161 "data_offset": 0, 00:14:54.161 "data_size": 63488 00:14:54.161 }, 00:14:54.161 { 00:14:54.161 "name": null, 00:14:54.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.161 "is_configured": false, 00:14:54.161 "data_offset": 2048, 00:14:54.161 "data_size": 63488 00:14:54.161 }, 00:14:54.161 { 00:14:54.161 "name": "BaseBdev3", 00:14:54.161 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:54.161 "is_configured": true, 00:14:54.161 "data_offset": 2048, 00:14:54.161 "data_size": 63488 00:14:54.161 }, 00:14:54.161 { 00:14:54.161 "name": "BaseBdev4", 00:14:54.161 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:54.161 "is_configured": true, 00:14:54.161 "data_offset": 2048, 00:14:54.161 "data_size": 63488 00:14:54.161 } 00:14:54.161 ] 00:14:54.161 }' 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.161 
04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.161 [2024-11-18 04:03:50.780125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.161 [2024-11-18 04:03:50.780330] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:54.161 [2024-11-18 04:03:50.780346] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:54.161 request: 00:14:54.161 { 00:14:54.161 "base_bdev": "BaseBdev1", 00:14:54.161 "raid_bdev": "raid_bdev1", 00:14:54.161 "method": "bdev_raid_add_base_bdev", 00:14:54.161 "req_id": 1 00:14:54.161 } 00:14:54.161 Got JSON-RPC error response 00:14:54.161 response: 00:14:54.161 { 00:14:54.161 "code": -22, 00:14:54.161 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:54.161 } 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:54.161 04:03:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.545 04:03:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.545 "name": "raid_bdev1", 00:14:55.545 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:55.545 "strip_size_kb": 0, 00:14:55.545 "state": "online", 00:14:55.545 "raid_level": "raid1", 00:14:55.545 "superblock": true, 00:14:55.545 "num_base_bdevs": 4, 00:14:55.545 "num_base_bdevs_discovered": 2, 00:14:55.545 "num_base_bdevs_operational": 2, 00:14:55.545 "base_bdevs_list": [ 00:14:55.545 { 00:14:55.545 "name": null, 00:14:55.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.545 "is_configured": false, 00:14:55.545 "data_offset": 0, 00:14:55.545 "data_size": 63488 00:14:55.545 }, 00:14:55.545 { 00:14:55.545 "name": null, 00:14:55.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.545 "is_configured": false, 00:14:55.545 "data_offset": 2048, 00:14:55.545 "data_size": 63488 00:14:55.545 }, 00:14:55.545 { 00:14:55.545 "name": "BaseBdev3", 00:14:55.545 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:55.545 "is_configured": true, 00:14:55.545 "data_offset": 2048, 00:14:55.545 "data_size": 63488 00:14:55.545 }, 00:14:55.545 { 00:14:55.545 "name": "BaseBdev4", 00:14:55.545 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:55.545 "is_configured": true, 00:14:55.545 "data_offset": 2048, 00:14:55.545 "data_size": 63488 00:14:55.545 } 00:14:55.545 ] 00:14:55.545 }' 00:14:55.545 04:03:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.545 04:03:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.814 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.815 "name": "raid_bdev1", 00:14:55.815 "uuid": "ef88e58b-0fd4-4cb5-9263-3ecf733f4b33", 00:14:55.815 "strip_size_kb": 0, 00:14:55.815 "state": "online", 00:14:55.815 "raid_level": "raid1", 00:14:55.815 "superblock": true, 00:14:55.815 "num_base_bdevs": 4, 00:14:55.815 "num_base_bdevs_discovered": 2, 00:14:55.815 "num_base_bdevs_operational": 2, 00:14:55.815 "base_bdevs_list": [ 00:14:55.815 { 00:14:55.815 "name": null, 00:14:55.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.815 "is_configured": false, 00:14:55.815 "data_offset": 0, 00:14:55.815 "data_size": 63488 00:14:55.815 }, 00:14:55.815 { 00:14:55.815 "name": null, 00:14:55.815 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:55.815 "is_configured": false, 00:14:55.815 "data_offset": 2048, 00:14:55.815 "data_size": 63488 00:14:55.815 }, 00:14:55.815 { 00:14:55.815 "name": "BaseBdev3", 00:14:55.815 "uuid": "78a2d1a1-f7f1-5740-94b5-d5f970e081e9", 00:14:55.815 "is_configured": true, 00:14:55.815 "data_offset": 2048, 00:14:55.815 "data_size": 63488 00:14:55.815 }, 00:14:55.815 { 00:14:55.815 "name": "BaseBdev4", 00:14:55.815 "uuid": "0d774534-cc68-5d80-b416-fc51eff7237f", 00:14:55.815 "is_configured": true, 00:14:55.815 "data_offset": 2048, 00:14:55.815 "data_size": 63488 00:14:55.815 } 00:14:55.815 ] 00:14:55.815 }' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79076 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79076 ']' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79076 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79076 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79076' 00:14:55.815 killing process with pid 79076 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79076 00:14:55.815 Received shutdown signal, test time was about 17.935751 seconds 00:14:55.815 00:14:55.815 Latency(us) 00:14:55.815 [2024-11-18T04:03:52.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.815 [2024-11-18T04:03:52.456Z] =================================================================================================================== 00:14:55.815 [2024-11-18T04:03:52.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.815 [2024-11-18 04:03:52.397812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.815 [2024-11-18 04:03:52.397943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.815 [2024-11-18 04:03:52.398011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.815 [2024-11-18 04:03:52.398020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:55.815 04:03:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79076 00:14:56.402 [2024-11-18 04:03:52.781627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.343
************************************ 00:14:57.343 04:03:53 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:57.343 04:03:53 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:57.343 04:03:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:57.343 04:03:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.343 04:03:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.343 ************************************ 00:14:57.343 START TEST raid5f_state_function_test 00:14:57.343 ************************************ 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.343 04:03:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79802 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:57.343 04:03:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79802' 00:14:57.343 Process raid pid: 79802 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79802 00:14:57.343 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79802 ']' 00:14:57.344 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.344 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.344 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.344 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.344 04:03:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.603 [2024-11-18 04:03:54.017495] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:57.603 [2024-11-18 04:03:54.017619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.603 [2024-11-18 04:03:54.189955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.863 [2024-11-18 04:03:54.298125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.863 [2024-11-18 04:03:54.490379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.863 [2024-11-18 04:03:54.490411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 [2024-11-18 04:03:54.832057] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.433 [2024-11-18 04:03:54.832110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.433 [2024-11-18 04:03:54.832120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.433 [2024-11-18 04:03:54.832129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.433 [2024-11-18 04:03:54.832136] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:58.433 [2024-11-18 04:03:54.832144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.433 "name": "Existed_Raid", 00:14:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.433 "strip_size_kb": 64, 00:14:58.433 "state": "configuring", 00:14:58.433 "raid_level": "raid5f", 00:14:58.433 "superblock": false, 00:14:58.433 "num_base_bdevs": 3, 00:14:58.433 "num_base_bdevs_discovered": 0, 00:14:58.433 "num_base_bdevs_operational": 3, 00:14:58.433 "base_bdevs_list": [ 00:14:58.433 { 00:14:58.433 "name": "BaseBdev1", 00:14:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.433 "is_configured": false, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 0 00:14:58.433 }, 00:14:58.433 { 00:14:58.433 "name": "BaseBdev2", 00:14:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.433 "is_configured": false, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 0 00:14:58.433 }, 00:14:58.433 { 00:14:58.433 "name": "BaseBdev3", 00:14:58.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.433 "is_configured": false, 00:14:58.433 "data_offset": 0, 00:14:58.433 "data_size": 0 00:14:58.433 } 00:14:58.433 ] 00:14:58.433 }' 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.433 04:03:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.693 [2024-11-18 04:03:55.287259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.693 [2024-11-18 04:03:55.287335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.693 [2024-11-18 04:03:55.299242] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.693 [2024-11-18 04:03:55.299317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.693 [2024-11-18 04:03:55.299344] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.693 [2024-11-18 04:03:55.299367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.693 [2024-11-18 04:03:55.299385] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:58.693 [2024-11-18 04:03:55.299405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.693 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.954 [2024-11-18 04:03:55.344569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.954 BaseBdev1 00:14:58.954 04:03:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.954 [ 00:14:58.954 { 00:14:58.954 "name": "BaseBdev1", 00:14:58.954 "aliases": [ 00:14:58.954 "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555" 00:14:58.954 ], 00:14:58.954 "product_name": "Malloc disk", 00:14:58.954 "block_size": 512, 00:14:58.954 "num_blocks": 65536, 00:14:58.954 "uuid": "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555", 00:14:58.954 "assigned_rate_limits": { 00:14:58.954 "rw_ios_per_sec": 0, 00:14:58.954 
"rw_mbytes_per_sec": 0, 00:14:58.954 "r_mbytes_per_sec": 0, 00:14:58.954 "w_mbytes_per_sec": 0 00:14:58.954 }, 00:14:58.954 "claimed": true, 00:14:58.954 "claim_type": "exclusive_write", 00:14:58.954 "zoned": false, 00:14:58.954 "supported_io_types": { 00:14:58.954 "read": true, 00:14:58.954 "write": true, 00:14:58.954 "unmap": true, 00:14:58.954 "flush": true, 00:14:58.954 "reset": true, 00:14:58.954 "nvme_admin": false, 00:14:58.954 "nvme_io": false, 00:14:58.954 "nvme_io_md": false, 00:14:58.954 "write_zeroes": true, 00:14:58.954 "zcopy": true, 00:14:58.954 "get_zone_info": false, 00:14:58.954 "zone_management": false, 00:14:58.954 "zone_append": false, 00:14:58.954 "compare": false, 00:14:58.954 "compare_and_write": false, 00:14:58.954 "abort": true, 00:14:58.954 "seek_hole": false, 00:14:58.954 "seek_data": false, 00:14:58.954 "copy": true, 00:14:58.954 "nvme_iov_md": false 00:14:58.954 }, 00:14:58.954 "memory_domains": [ 00:14:58.954 { 00:14:58.954 "dma_device_id": "system", 00:14:58.954 "dma_device_type": 1 00:14:58.954 }, 00:14:58.954 { 00:14:58.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.954 "dma_device_type": 2 00:14:58.954 } 00:14:58.954 ], 00:14:58.954 "driver_specific": {} 00:14:58.954 } 00:14:58.954 ] 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.954 04:03:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.954 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.954 "name": "Existed_Raid", 00:14:58.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.954 "strip_size_kb": 64, 00:14:58.954 "state": "configuring", 00:14:58.954 "raid_level": "raid5f", 00:14:58.954 "superblock": false, 00:14:58.954 "num_base_bdevs": 3, 00:14:58.954 "num_base_bdevs_discovered": 1, 00:14:58.954 "num_base_bdevs_operational": 3, 00:14:58.954 "base_bdevs_list": [ 00:14:58.954 { 00:14:58.954 "name": "BaseBdev1", 00:14:58.954 "uuid": "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555", 00:14:58.954 "is_configured": true, 00:14:58.954 "data_offset": 0, 00:14:58.954 "data_size": 65536 00:14:58.954 }, 00:14:58.954 { 00:14:58.954 "name": 
"BaseBdev2", 00:14:58.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.954 "is_configured": false, 00:14:58.955 "data_offset": 0, 00:14:58.955 "data_size": 0 00:14:58.955 }, 00:14:58.955 { 00:14:58.955 "name": "BaseBdev3", 00:14:58.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.955 "is_configured": false, 00:14:58.955 "data_offset": 0, 00:14:58.955 "data_size": 0 00:14:58.955 } 00:14:58.955 ] 00:14:58.955 }' 00:14:58.955 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.955 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.215 [2024-11-18 04:03:55.823915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.215 [2024-11-18 04:03:55.824001] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.215 [2024-11-18 04:03:55.831955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.215 [2024-11-18 04:03:55.833678] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:59.215 [2024-11-18 04:03:55.833758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.215 [2024-11-18 04:03:55.833788] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:59.215 [2024-11-18 04:03:55.833812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.215 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.476 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.476 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.476 "name": "Existed_Raid", 00:14:59.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.476 "strip_size_kb": 64, 00:14:59.476 "state": "configuring", 00:14:59.476 "raid_level": "raid5f", 00:14:59.476 "superblock": false, 00:14:59.476 "num_base_bdevs": 3, 00:14:59.476 "num_base_bdevs_discovered": 1, 00:14:59.476 "num_base_bdevs_operational": 3, 00:14:59.476 "base_bdevs_list": [ 00:14:59.476 { 00:14:59.476 "name": "BaseBdev1", 00:14:59.476 "uuid": "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555", 00:14:59.476 "is_configured": true, 00:14:59.476 "data_offset": 0, 00:14:59.476 "data_size": 65536 00:14:59.476 }, 00:14:59.476 { 00:14:59.476 "name": "BaseBdev2", 00:14:59.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.476 "is_configured": false, 00:14:59.476 "data_offset": 0, 00:14:59.476 "data_size": 0 00:14:59.476 }, 00:14:59.476 { 00:14:59.476 "name": "BaseBdev3", 00:14:59.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.476 "is_configured": false, 00:14:59.476 "data_offset": 0, 00:14:59.476 "data_size": 0 00:14:59.476 } 00:14:59.476 ] 00:14:59.476 }' 00:14:59.476 04:03:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.476 04:03:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.736 [2024-11-18 04:03:56.328862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.736 BaseBdev2 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.736 [ 00:14:59.736 { 00:14:59.736 "name": "BaseBdev2", 00:14:59.736 "aliases": [ 00:14:59.736 "86fc5fad-c97a-4932-92ac-6121f3cf6ff9" 00:14:59.736 ], 00:14:59.736 "product_name": "Malloc disk", 00:14:59.736 "block_size": 512, 00:14:59.736 "num_blocks": 65536, 00:14:59.736 "uuid": "86fc5fad-c97a-4932-92ac-6121f3cf6ff9", 00:14:59.736 "assigned_rate_limits": { 00:14:59.736 "rw_ios_per_sec": 0, 00:14:59.736 "rw_mbytes_per_sec": 0, 00:14:59.736 "r_mbytes_per_sec": 0, 00:14:59.736 "w_mbytes_per_sec": 0 00:14:59.736 }, 00:14:59.736 "claimed": true, 00:14:59.736 "claim_type": "exclusive_write", 00:14:59.736 "zoned": false, 00:14:59.736 "supported_io_types": { 00:14:59.736 "read": true, 00:14:59.736 "write": true, 00:14:59.736 "unmap": true, 00:14:59.736 "flush": true, 00:14:59.736 "reset": true, 00:14:59.736 "nvme_admin": false, 00:14:59.736 "nvme_io": false, 00:14:59.736 "nvme_io_md": false, 00:14:59.736 "write_zeroes": true, 00:14:59.736 "zcopy": true, 00:14:59.736 "get_zone_info": false, 00:14:59.736 "zone_management": false, 00:14:59.736 "zone_append": false, 00:14:59.736 "compare": false, 00:14:59.736 "compare_and_write": false, 00:14:59.736 "abort": true, 00:14:59.736 "seek_hole": false, 00:14:59.736 "seek_data": false, 00:14:59.736 "copy": true, 00:14:59.736 "nvme_iov_md": false 00:14:59.736 }, 00:14:59.736 "memory_domains": [ 00:14:59.736 { 00:14:59.736 "dma_device_id": "system", 00:14:59.736 "dma_device_type": 1 00:14:59.736 }, 00:14:59.736 { 00:14:59.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.736 "dma_device_type": 2 00:14:59.736 } 00:14:59.736 ], 00:14:59.736 "driver_specific": {} 00:14:59.736 } 00:14:59.736 ] 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.736 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:59.997 "name": "Existed_Raid", 00:14:59.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.997 "strip_size_kb": 64, 00:14:59.997 "state": "configuring", 00:14:59.997 "raid_level": "raid5f", 00:14:59.997 "superblock": false, 00:14:59.997 "num_base_bdevs": 3, 00:14:59.997 "num_base_bdevs_discovered": 2, 00:14:59.997 "num_base_bdevs_operational": 3, 00:14:59.997 "base_bdevs_list": [ 00:14:59.997 { 00:14:59.997 "name": "BaseBdev1", 00:14:59.997 "uuid": "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555", 00:14:59.997 "is_configured": true, 00:14:59.997 "data_offset": 0, 00:14:59.997 "data_size": 65536 00:14:59.997 }, 00:14:59.997 { 00:14:59.997 "name": "BaseBdev2", 00:14:59.997 "uuid": "86fc5fad-c97a-4932-92ac-6121f3cf6ff9", 00:14:59.997 "is_configured": true, 00:14:59.997 "data_offset": 0, 00:14:59.997 "data_size": 65536 00:14:59.997 }, 00:14:59.997 { 00:14:59.997 "name": "BaseBdev3", 00:14:59.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.997 "is_configured": false, 00:14:59.997 "data_offset": 0, 00:14:59.997 "data_size": 0 00:14:59.997 } 00:14:59.997 ] 00:14:59.997 }' 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.997 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.257 [2024-11-18 04:03:56.882655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.257 [2024-11-18 04:03:56.882767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:00.257 [2024-11-18 04:03:56.882783] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:00.257 [2024-11-18 04:03:56.883104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:00.257 [2024-11-18 04:03:56.888407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:00.257 [2024-11-18 04:03:56.888425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:00.257 [2024-11-18 04:03:56.888712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.257 BaseBdev3 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.257 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.517 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.517 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:00.517 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.517 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.517 [ 00:15:00.517 { 00:15:00.517 "name": "BaseBdev3", 00:15:00.517 "aliases": [ 00:15:00.517 "a2568221-a54f-452b-ba8e-ce858aa87f7b" 00:15:00.517 ], 00:15:00.517 "product_name": "Malloc disk", 00:15:00.517 "block_size": 512, 00:15:00.517 "num_blocks": 65536, 00:15:00.517 "uuid": "a2568221-a54f-452b-ba8e-ce858aa87f7b", 00:15:00.517 "assigned_rate_limits": { 00:15:00.517 "rw_ios_per_sec": 0, 00:15:00.517 "rw_mbytes_per_sec": 0, 00:15:00.517 "r_mbytes_per_sec": 0, 00:15:00.517 "w_mbytes_per_sec": 0 00:15:00.517 }, 00:15:00.517 "claimed": true, 00:15:00.517 "claim_type": "exclusive_write", 00:15:00.517 "zoned": false, 00:15:00.517 "supported_io_types": { 00:15:00.517 "read": true, 00:15:00.517 "write": true, 00:15:00.517 "unmap": true, 00:15:00.517 "flush": true, 00:15:00.517 "reset": true, 00:15:00.517 "nvme_admin": false, 00:15:00.517 "nvme_io": false, 00:15:00.517 "nvme_io_md": false, 00:15:00.517 "write_zeroes": true, 00:15:00.517 "zcopy": true, 00:15:00.517 "get_zone_info": false, 00:15:00.517 "zone_management": false, 00:15:00.517 "zone_append": false, 00:15:00.517 "compare": false, 00:15:00.517 "compare_and_write": false, 00:15:00.518 "abort": true, 00:15:00.518 "seek_hole": false, 00:15:00.518 "seek_data": false, 00:15:00.518 "copy": true, 00:15:00.518 "nvme_iov_md": false 00:15:00.518 }, 00:15:00.518 "memory_domains": [ 00:15:00.518 { 00:15:00.518 "dma_device_id": "system", 00:15:00.518 "dma_device_type": 1 00:15:00.518 }, 00:15:00.518 { 00:15:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.518 "dma_device_type": 2 00:15:00.518 } 00:15:00.518 ], 00:15:00.518 "driver_specific": {} 00:15:00.518 } 00:15:00.518 ] 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.518 04:03:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.518 "name": "Existed_Raid", 00:15:00.518 "uuid": "e35073ff-6269-4371-a691-f752176f97a2", 00:15:00.518 "strip_size_kb": 64, 00:15:00.518 "state": "online", 00:15:00.518 "raid_level": "raid5f", 00:15:00.518 "superblock": false, 00:15:00.518 "num_base_bdevs": 3, 00:15:00.518 "num_base_bdevs_discovered": 3, 00:15:00.518 "num_base_bdevs_operational": 3, 00:15:00.518 "base_bdevs_list": [ 00:15:00.518 { 00:15:00.518 "name": "BaseBdev1", 00:15:00.518 "uuid": "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555", 00:15:00.518 "is_configured": true, 00:15:00.518 "data_offset": 0, 00:15:00.518 "data_size": 65536 00:15:00.518 }, 00:15:00.518 { 00:15:00.518 "name": "BaseBdev2", 00:15:00.518 "uuid": "86fc5fad-c97a-4932-92ac-6121f3cf6ff9", 00:15:00.518 "is_configured": true, 00:15:00.518 "data_offset": 0, 00:15:00.518 "data_size": 65536 00:15:00.518 }, 00:15:00.518 { 00:15:00.518 "name": "BaseBdev3", 00:15:00.518 "uuid": "a2568221-a54f-452b-ba8e-ce858aa87f7b", 00:15:00.518 "is_configured": true, 00:15:00.518 "data_offset": 0, 00:15:00.518 "data_size": 65536 00:15:00.518 } 00:15:00.518 ] 00:15:00.518 }' 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.518 04:03:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.778 04:03:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.778 [2024-11-18 04:03:57.346122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.778 "name": "Existed_Raid", 00:15:00.778 "aliases": [ 00:15:00.778 "e35073ff-6269-4371-a691-f752176f97a2" 00:15:00.778 ], 00:15:00.778 "product_name": "Raid Volume", 00:15:00.778 "block_size": 512, 00:15:00.778 "num_blocks": 131072, 00:15:00.778 "uuid": "e35073ff-6269-4371-a691-f752176f97a2", 00:15:00.778 "assigned_rate_limits": { 00:15:00.778 "rw_ios_per_sec": 0, 00:15:00.778 "rw_mbytes_per_sec": 0, 00:15:00.778 "r_mbytes_per_sec": 0, 00:15:00.778 "w_mbytes_per_sec": 0 00:15:00.778 }, 00:15:00.778 "claimed": false, 00:15:00.778 "zoned": false, 00:15:00.778 "supported_io_types": { 00:15:00.778 "read": true, 00:15:00.778 "write": true, 00:15:00.778 "unmap": false, 00:15:00.778 "flush": false, 00:15:00.778 "reset": true, 00:15:00.778 "nvme_admin": false, 00:15:00.778 "nvme_io": false, 00:15:00.778 "nvme_io_md": false, 00:15:00.778 "write_zeroes": true, 00:15:00.778 "zcopy": false, 00:15:00.778 "get_zone_info": false, 00:15:00.778 "zone_management": false, 00:15:00.778 "zone_append": false, 
00:15:00.778 "compare": false, 00:15:00.778 "compare_and_write": false, 00:15:00.778 "abort": false, 00:15:00.778 "seek_hole": false, 00:15:00.778 "seek_data": false, 00:15:00.778 "copy": false, 00:15:00.778 "nvme_iov_md": false 00:15:00.778 }, 00:15:00.778 "driver_specific": { 00:15:00.778 "raid": { 00:15:00.778 "uuid": "e35073ff-6269-4371-a691-f752176f97a2", 00:15:00.778 "strip_size_kb": 64, 00:15:00.778 "state": "online", 00:15:00.778 "raid_level": "raid5f", 00:15:00.778 "superblock": false, 00:15:00.778 "num_base_bdevs": 3, 00:15:00.778 "num_base_bdevs_discovered": 3, 00:15:00.778 "num_base_bdevs_operational": 3, 00:15:00.778 "base_bdevs_list": [ 00:15:00.778 { 00:15:00.778 "name": "BaseBdev1", 00:15:00.778 "uuid": "6704fa89-15cc-46b4-8ac2-ff1bfe4a7555", 00:15:00.778 "is_configured": true, 00:15:00.778 "data_offset": 0, 00:15:00.778 "data_size": 65536 00:15:00.778 }, 00:15:00.778 { 00:15:00.778 "name": "BaseBdev2", 00:15:00.778 "uuid": "86fc5fad-c97a-4932-92ac-6121f3cf6ff9", 00:15:00.778 "is_configured": true, 00:15:00.778 "data_offset": 0, 00:15:00.778 "data_size": 65536 00:15:00.778 }, 00:15:00.778 { 00:15:00.778 "name": "BaseBdev3", 00:15:00.778 "uuid": "a2568221-a54f-452b-ba8e-ce858aa87f7b", 00:15:00.778 "is_configured": true, 00:15:00.778 "data_offset": 0, 00:15:00.778 "data_size": 65536 00:15:00.778 } 00:15:00.778 ] 00:15:00.778 } 00:15:00.778 } 00:15:00.778 }' 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.778 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:00.778 BaseBdev2 00:15:00.778 BaseBdev3' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.039 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.039 [2024-11-18 04:03:57.601484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:01.299 
04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.299 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.300 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.300 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.300 "name": "Existed_Raid", 00:15:01.300 "uuid": "e35073ff-6269-4371-a691-f752176f97a2", 00:15:01.300 "strip_size_kb": 64, 00:15:01.300 "state": 
"online", 00:15:01.300 "raid_level": "raid5f", 00:15:01.300 "superblock": false, 00:15:01.300 "num_base_bdevs": 3, 00:15:01.300 "num_base_bdevs_discovered": 2, 00:15:01.300 "num_base_bdevs_operational": 2, 00:15:01.300 "base_bdevs_list": [ 00:15:01.300 { 00:15:01.300 "name": null, 00:15:01.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.300 "is_configured": false, 00:15:01.300 "data_offset": 0, 00:15:01.300 "data_size": 65536 00:15:01.300 }, 00:15:01.300 { 00:15:01.300 "name": "BaseBdev2", 00:15:01.300 "uuid": "86fc5fad-c97a-4932-92ac-6121f3cf6ff9", 00:15:01.300 "is_configured": true, 00:15:01.300 "data_offset": 0, 00:15:01.300 "data_size": 65536 00:15:01.300 }, 00:15:01.300 { 00:15:01.300 "name": "BaseBdev3", 00:15:01.300 "uuid": "a2568221-a54f-452b-ba8e-ce858aa87f7b", 00:15:01.300 "is_configured": true, 00:15:01.300 "data_offset": 0, 00:15:01.300 "data_size": 65536 00:15:01.300 } 00:15:01.300 ] 00:15:01.300 }' 00:15:01.300 04:03:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.300 04:03:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.560 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.560 [2024-11-18 04:03:58.145851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.560 [2024-11-18 04:03:58.145942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.820 [2024-11-18 04:03:58.234159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.820 [2024-11-18 04:03:58.294077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:01.820 [2024-11-18 04:03:58.294119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.820 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 BaseBdev2 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:02.081 [ 00:15:02.081 { 00:15:02.081 "name": "BaseBdev2", 00:15:02.081 "aliases": [ 00:15:02.081 "524165c8-9e81-4a56-b5a9-8bc165adc31a" 00:15:02.081 ], 00:15:02.081 "product_name": "Malloc disk", 00:15:02.081 "block_size": 512, 00:15:02.081 "num_blocks": 65536, 00:15:02.081 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:02.081 "assigned_rate_limits": { 00:15:02.081 "rw_ios_per_sec": 0, 00:15:02.081 "rw_mbytes_per_sec": 0, 00:15:02.081 "r_mbytes_per_sec": 0, 00:15:02.081 "w_mbytes_per_sec": 0 00:15:02.081 }, 00:15:02.081 "claimed": false, 00:15:02.081 "zoned": false, 00:15:02.081 "supported_io_types": { 00:15:02.081 "read": true, 00:15:02.081 "write": true, 00:15:02.081 "unmap": true, 00:15:02.081 "flush": true, 00:15:02.081 "reset": true, 00:15:02.081 "nvme_admin": false, 00:15:02.081 "nvme_io": false, 00:15:02.081 "nvme_io_md": false, 00:15:02.081 "write_zeroes": true, 00:15:02.081 "zcopy": true, 00:15:02.081 "get_zone_info": false, 00:15:02.081 "zone_management": false, 00:15:02.081 "zone_append": false, 00:15:02.081 "compare": false, 00:15:02.081 "compare_and_write": false, 00:15:02.081 "abort": true, 00:15:02.081 "seek_hole": false, 00:15:02.081 "seek_data": false, 00:15:02.081 "copy": true, 00:15:02.081 "nvme_iov_md": false 00:15:02.081 }, 00:15:02.081 "memory_domains": [ 00:15:02.081 { 00:15:02.081 "dma_device_id": "system", 00:15:02.081 "dma_device_type": 1 00:15:02.081 }, 00:15:02.081 { 00:15:02.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.081 "dma_device_type": 2 00:15:02.081 } 00:15:02.081 ], 00:15:02.081 "driver_specific": {} 00:15:02.081 } 00:15:02.081 ] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 BaseBdev3 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.081 04:03:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.081 [ 00:15:02.081 { 00:15:02.081 "name": "BaseBdev3", 00:15:02.081 "aliases": [ 00:15:02.082 "eb5c6abf-d120-47f6-9a75-c3833f47169d" 00:15:02.082 ], 00:15:02.082 "product_name": "Malloc disk", 00:15:02.082 "block_size": 512, 00:15:02.082 "num_blocks": 65536, 00:15:02.082 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:02.082 "assigned_rate_limits": { 00:15:02.082 "rw_ios_per_sec": 0, 00:15:02.082 "rw_mbytes_per_sec": 0, 00:15:02.082 "r_mbytes_per_sec": 0, 00:15:02.082 "w_mbytes_per_sec": 0 00:15:02.082 }, 00:15:02.082 "claimed": false, 00:15:02.082 "zoned": false, 00:15:02.082 "supported_io_types": { 00:15:02.082 "read": true, 00:15:02.082 "write": true, 00:15:02.082 "unmap": true, 00:15:02.082 "flush": true, 00:15:02.082 "reset": true, 00:15:02.082 "nvme_admin": false, 00:15:02.082 "nvme_io": false, 00:15:02.082 "nvme_io_md": false, 00:15:02.082 "write_zeroes": true, 00:15:02.082 "zcopy": true, 00:15:02.082 "get_zone_info": false, 00:15:02.082 "zone_management": false, 00:15:02.082 "zone_append": false, 00:15:02.082 "compare": false, 00:15:02.082 "compare_and_write": false, 00:15:02.082 "abort": true, 00:15:02.082 "seek_hole": false, 00:15:02.082 "seek_data": false, 00:15:02.082 "copy": true, 00:15:02.082 "nvme_iov_md": false 00:15:02.082 }, 00:15:02.082 "memory_domains": [ 00:15:02.082 { 00:15:02.082 "dma_device_id": "system", 00:15:02.082 "dma_device_type": 1 00:15:02.082 }, 00:15:02.082 { 00:15:02.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.082 "dma_device_type": 2 00:15:02.082 } 00:15:02.082 ], 00:15:02.082 "driver_specific": {} 00:15:02.082 } 00:15:02.082 ] 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.082 04:03:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.082 [2024-11-18 04:03:58.599208] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.082 [2024-11-18 04:03:58.599308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.082 [2024-11-18 04:03:58.599349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.082 [2024-11-18 04:03:58.601111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.082 04:03:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.082 "name": "Existed_Raid", 00:15:02.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.082 "strip_size_kb": 64, 00:15:02.082 "state": "configuring", 00:15:02.082 "raid_level": "raid5f", 00:15:02.082 "superblock": false, 00:15:02.082 "num_base_bdevs": 3, 00:15:02.082 "num_base_bdevs_discovered": 2, 00:15:02.082 "num_base_bdevs_operational": 3, 00:15:02.082 "base_bdevs_list": [ 00:15:02.082 { 00:15:02.082 "name": "BaseBdev1", 00:15:02.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.082 "is_configured": false, 00:15:02.082 "data_offset": 0, 00:15:02.082 "data_size": 0 00:15:02.082 }, 00:15:02.082 { 00:15:02.082 "name": "BaseBdev2", 00:15:02.082 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:02.082 "is_configured": true, 00:15:02.082 "data_offset": 0, 00:15:02.082 "data_size": 65536 00:15:02.082 }, 00:15:02.082 { 00:15:02.082 "name": "BaseBdev3", 00:15:02.082 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:02.082 "is_configured": true, 
00:15:02.082 "data_offset": 0, 00:15:02.082 "data_size": 65536 00:15:02.082 } 00:15:02.082 ] 00:15:02.082 }' 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.082 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.652 04:03:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:02.652 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.652 04:03:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.652 [2024-11-18 04:03:58.998468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.652 04:03:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.652 "name": "Existed_Raid", 00:15:02.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.652 "strip_size_kb": 64, 00:15:02.652 "state": "configuring", 00:15:02.652 "raid_level": "raid5f", 00:15:02.652 "superblock": false, 00:15:02.652 "num_base_bdevs": 3, 00:15:02.652 "num_base_bdevs_discovered": 1, 00:15:02.652 "num_base_bdevs_operational": 3, 00:15:02.652 "base_bdevs_list": [ 00:15:02.652 { 00:15:02.652 "name": "BaseBdev1", 00:15:02.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.652 "is_configured": false, 00:15:02.652 "data_offset": 0, 00:15:02.652 "data_size": 0 00:15:02.652 }, 00:15:02.652 { 00:15:02.652 "name": null, 00:15:02.652 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:02.652 "is_configured": false, 00:15:02.652 "data_offset": 0, 00:15:02.652 "data_size": 65536 00:15:02.652 }, 00:15:02.652 { 00:15:02.652 "name": "BaseBdev3", 00:15:02.652 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:02.652 "is_configured": true, 00:15:02.652 "data_offset": 0, 00:15:02.652 "data_size": 65536 00:15:02.652 } 00:15:02.652 ] 00:15:02.652 }' 00:15:02.652 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.652 04:03:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.915 [2024-11-18 04:03:59.454500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.915 BaseBdev1 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.915 04:03:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.915 [ 00:15:02.915 { 00:15:02.915 "name": "BaseBdev1", 00:15:02.915 "aliases": [ 00:15:02.915 "878d8238-6631-4ad8-a7f6-14f666943331" 00:15:02.915 ], 00:15:02.915 "product_name": "Malloc disk", 00:15:02.915 "block_size": 512, 00:15:02.915 "num_blocks": 65536, 00:15:02.915 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:02.915 "assigned_rate_limits": { 00:15:02.915 "rw_ios_per_sec": 0, 00:15:02.915 "rw_mbytes_per_sec": 0, 00:15:02.915 "r_mbytes_per_sec": 0, 00:15:02.915 "w_mbytes_per_sec": 0 00:15:02.915 }, 00:15:02.915 "claimed": true, 00:15:02.915 "claim_type": "exclusive_write", 00:15:02.915 "zoned": false, 00:15:02.915 "supported_io_types": { 00:15:02.915 "read": true, 00:15:02.915 "write": true, 00:15:02.915 "unmap": true, 00:15:02.915 "flush": true, 00:15:02.915 "reset": true, 00:15:02.915 "nvme_admin": false, 00:15:02.915 "nvme_io": false, 00:15:02.915 "nvme_io_md": false, 00:15:02.915 "write_zeroes": true, 00:15:02.915 "zcopy": true, 00:15:02.915 "get_zone_info": false, 00:15:02.915 "zone_management": false, 00:15:02.915 "zone_append": false, 00:15:02.915 
"compare": false, 00:15:02.915 "compare_and_write": false, 00:15:02.915 "abort": true, 00:15:02.915 "seek_hole": false, 00:15:02.915 "seek_data": false, 00:15:02.915 "copy": true, 00:15:02.915 "nvme_iov_md": false 00:15:02.915 }, 00:15:02.915 "memory_domains": [ 00:15:02.915 { 00:15:02.915 "dma_device_id": "system", 00:15:02.915 "dma_device_type": 1 00:15:02.915 }, 00:15:02.915 { 00:15:02.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.915 "dma_device_type": 2 00:15:02.915 } 00:15:02.915 ], 00:15:02.915 "driver_specific": {} 00:15:02.915 } 00:15:02.915 ] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.915 04:03:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.915 "name": "Existed_Raid", 00:15:02.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.915 "strip_size_kb": 64, 00:15:02.915 "state": "configuring", 00:15:02.915 "raid_level": "raid5f", 00:15:02.915 "superblock": false, 00:15:02.915 "num_base_bdevs": 3, 00:15:02.915 "num_base_bdevs_discovered": 2, 00:15:02.915 "num_base_bdevs_operational": 3, 00:15:02.915 "base_bdevs_list": [ 00:15:02.915 { 00:15:02.915 "name": "BaseBdev1", 00:15:02.915 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:02.915 "is_configured": true, 00:15:02.915 "data_offset": 0, 00:15:02.915 "data_size": 65536 00:15:02.915 }, 00:15:02.915 { 00:15:02.915 "name": null, 00:15:02.915 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:02.915 "is_configured": false, 00:15:02.915 "data_offset": 0, 00:15:02.915 "data_size": 65536 00:15:02.915 }, 00:15:02.915 { 00:15:02.915 "name": "BaseBdev3", 00:15:02.915 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:02.915 "is_configured": true, 00:15:02.915 "data_offset": 0, 00:15:02.915 "data_size": 65536 00:15:02.915 } 00:15:02.915 ] 00:15:02.915 }' 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.915 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.485 04:03:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.485 [2024-11-18 04:03:59.961679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.485 04:03:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.485 04:03:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.485 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.485 "name": "Existed_Raid", 00:15:03.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.486 "strip_size_kb": 64, 00:15:03.486 "state": "configuring", 00:15:03.486 "raid_level": "raid5f", 00:15:03.486 "superblock": false, 00:15:03.486 "num_base_bdevs": 3, 00:15:03.486 "num_base_bdevs_discovered": 1, 00:15:03.486 "num_base_bdevs_operational": 3, 00:15:03.486 "base_bdevs_list": [ 00:15:03.486 { 00:15:03.486 "name": "BaseBdev1", 00:15:03.486 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:03.486 "is_configured": true, 00:15:03.486 "data_offset": 0, 00:15:03.486 "data_size": 65536 00:15:03.486 }, 00:15:03.486 { 00:15:03.486 "name": null, 00:15:03.486 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:03.486 "is_configured": false, 00:15:03.486 "data_offset": 0, 00:15:03.486 "data_size": 65536 00:15:03.486 }, 00:15:03.486 { 00:15:03.486 "name": null, 
00:15:03.486 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:03.486 "is_configured": false, 00:15:03.486 "data_offset": 0, 00:15:03.486 "data_size": 65536 00:15:03.486 } 00:15:03.486 ] 00:15:03.486 }' 00:15:03.486 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.486 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.055 [2024-11-18 04:04:00.436892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.055 04:04:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.055 "name": "Existed_Raid", 00:15:04.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.055 "strip_size_kb": 64, 00:15:04.055 "state": "configuring", 00:15:04.055 "raid_level": "raid5f", 00:15:04.055 "superblock": false, 00:15:04.055 "num_base_bdevs": 3, 00:15:04.055 "num_base_bdevs_discovered": 2, 00:15:04.055 "num_base_bdevs_operational": 3, 00:15:04.055 "base_bdevs_list": [ 00:15:04.055 { 
00:15:04.055 "name": "BaseBdev1", 00:15:04.055 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:04.055 "is_configured": true, 00:15:04.055 "data_offset": 0, 00:15:04.055 "data_size": 65536 00:15:04.055 }, 00:15:04.055 { 00:15:04.055 "name": null, 00:15:04.055 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:04.055 "is_configured": false, 00:15:04.055 "data_offset": 0, 00:15:04.055 "data_size": 65536 00:15:04.055 }, 00:15:04.055 { 00:15:04.055 "name": "BaseBdev3", 00:15:04.055 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:04.055 "is_configured": true, 00:15:04.055 "data_offset": 0, 00:15:04.055 "data_size": 65536 00:15:04.055 } 00:15:04.055 ] 00:15:04.055 }' 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.055 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.315 04:04:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.315 [2024-11-18 04:04:00.932040] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.575 "name": "Existed_Raid", 00:15:04.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.575 "strip_size_kb": 64, 00:15:04.575 "state": "configuring", 00:15:04.575 "raid_level": "raid5f", 00:15:04.575 "superblock": false, 00:15:04.575 "num_base_bdevs": 3, 00:15:04.575 "num_base_bdevs_discovered": 1, 00:15:04.575 "num_base_bdevs_operational": 3, 00:15:04.575 "base_bdevs_list": [ 00:15:04.575 { 00:15:04.575 "name": null, 00:15:04.575 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:04.575 "is_configured": false, 00:15:04.575 "data_offset": 0, 00:15:04.575 "data_size": 65536 00:15:04.575 }, 00:15:04.575 { 00:15:04.575 "name": null, 00:15:04.575 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:04.575 "is_configured": false, 00:15:04.575 "data_offset": 0, 00:15:04.575 "data_size": 65536 00:15:04.575 }, 00:15:04.575 { 00:15:04.575 "name": "BaseBdev3", 00:15:04.575 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:04.575 "is_configured": true, 00:15:04.575 "data_offset": 0, 00:15:04.575 "data_size": 65536 00:15:04.575 } 00:15:04.575 ] 00:15:04.575 }' 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.575 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.834 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.834 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.834 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.834 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.834 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.094 [2024-11-18 04:04:01.498131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.094 04:04:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.094 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.094 "name": "Existed_Raid", 00:15:05.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.094 "strip_size_kb": 64, 00:15:05.094 "state": "configuring", 00:15:05.094 "raid_level": "raid5f", 00:15:05.094 "superblock": false, 00:15:05.094 "num_base_bdevs": 3, 00:15:05.094 "num_base_bdevs_discovered": 2, 00:15:05.094 "num_base_bdevs_operational": 3, 00:15:05.094 "base_bdevs_list": [ 00:15:05.094 { 00:15:05.094 "name": null, 00:15:05.094 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:05.094 "is_configured": false, 00:15:05.094 "data_offset": 0, 00:15:05.094 "data_size": 65536 00:15:05.094 }, 00:15:05.094 { 00:15:05.094 "name": "BaseBdev2", 00:15:05.094 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:05.094 "is_configured": true, 00:15:05.094 "data_offset": 0, 00:15:05.094 "data_size": 65536 00:15:05.094 }, 00:15:05.094 { 00:15:05.094 "name": "BaseBdev3", 00:15:05.094 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:05.094 "is_configured": true, 00:15:05.094 "data_offset": 0, 00:15:05.094 "data_size": 65536 00:15:05.094 } 00:15:05.094 ] 00:15:05.095 }' 00:15:05.095 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.095 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.354 04:04:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.354 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.615 04:04:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 878d8238-6631-4ad8-a7f6-14f666943331 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.615 [2024-11-18 04:04:02.067978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:05.615 [2024-11-18 04:04:02.068091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:05.615 [2024-11-18 04:04:02.068106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:05.615 [2024-11-18 04:04:02.068377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:05.615 [2024-11-18 04:04:02.073619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:05.615 [2024-11-18 04:04:02.073638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:05.615 [2024-11-18 04:04:02.073884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.615 NewBaseBdev 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.615 04:04:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.615 [ 00:15:05.615 { 00:15:05.615 "name": "NewBaseBdev", 00:15:05.615 "aliases": [ 00:15:05.615 "878d8238-6631-4ad8-a7f6-14f666943331" 00:15:05.615 ], 00:15:05.615 "product_name": "Malloc disk", 00:15:05.615 "block_size": 512, 00:15:05.615 "num_blocks": 65536, 00:15:05.615 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:05.615 "assigned_rate_limits": { 00:15:05.615 "rw_ios_per_sec": 0, 00:15:05.615 "rw_mbytes_per_sec": 0, 00:15:05.615 "r_mbytes_per_sec": 0, 00:15:05.615 "w_mbytes_per_sec": 0 00:15:05.615 }, 00:15:05.615 "claimed": true, 00:15:05.615 "claim_type": "exclusive_write", 00:15:05.615 "zoned": false, 00:15:05.615 "supported_io_types": { 00:15:05.615 "read": true, 00:15:05.615 "write": true, 00:15:05.615 "unmap": true, 00:15:05.615 "flush": true, 00:15:05.615 "reset": true, 00:15:05.615 "nvme_admin": false, 00:15:05.615 "nvme_io": false, 00:15:05.615 "nvme_io_md": false, 00:15:05.615 "write_zeroes": true, 00:15:05.615 "zcopy": true, 00:15:05.615 "get_zone_info": false, 00:15:05.615 "zone_management": false, 00:15:05.615 "zone_append": false, 00:15:05.615 "compare": false, 00:15:05.615 "compare_and_write": false, 00:15:05.615 "abort": true, 00:15:05.615 "seek_hole": false, 00:15:05.615 "seek_data": false, 00:15:05.615 "copy": true, 00:15:05.615 "nvme_iov_md": false 00:15:05.615 }, 00:15:05.615 "memory_domains": [ 00:15:05.615 { 00:15:05.615 "dma_device_id": "system", 00:15:05.615 "dma_device_type": 1 00:15:05.615 }, 00:15:05.615 { 00:15:05.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.615 "dma_device_type": 2 00:15:05.615 } 00:15:05.615 ], 00:15:05.615 "driver_specific": {} 00:15:05.615 } 00:15:05.615 ] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:05.615 04:04:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.615 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.615 "name": "Existed_Raid", 00:15:05.615 "uuid": "d96e0cd4-a897-4036-9b44-44c5355fbbb1", 00:15:05.615 "strip_size_kb": 64, 00:15:05.615 "state": "online", 
00:15:05.615 "raid_level": "raid5f", 00:15:05.615 "superblock": false, 00:15:05.615 "num_base_bdevs": 3, 00:15:05.615 "num_base_bdevs_discovered": 3, 00:15:05.615 "num_base_bdevs_operational": 3, 00:15:05.615 "base_bdevs_list": [ 00:15:05.615 { 00:15:05.615 "name": "NewBaseBdev", 00:15:05.615 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:05.615 "is_configured": true, 00:15:05.615 "data_offset": 0, 00:15:05.615 "data_size": 65536 00:15:05.615 }, 00:15:05.615 { 00:15:05.615 "name": "BaseBdev2", 00:15:05.616 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:05.616 "is_configured": true, 00:15:05.616 "data_offset": 0, 00:15:05.616 "data_size": 65536 00:15:05.616 }, 00:15:05.616 { 00:15:05.616 "name": "BaseBdev3", 00:15:05.616 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:05.616 "is_configured": true, 00:15:05.616 "data_offset": 0, 00:15:05.616 "data_size": 65536 00:15:05.616 } 00:15:05.616 ] 00:15:05.616 }' 00:15:05.616 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.616 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:06.184 04:04:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.184 [2024-11-18 04:04:02.579402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:06.184 "name": "Existed_Raid", 00:15:06.184 "aliases": [ 00:15:06.184 "d96e0cd4-a897-4036-9b44-44c5355fbbb1" 00:15:06.184 ], 00:15:06.184 "product_name": "Raid Volume", 00:15:06.184 "block_size": 512, 00:15:06.184 "num_blocks": 131072, 00:15:06.184 "uuid": "d96e0cd4-a897-4036-9b44-44c5355fbbb1", 00:15:06.184 "assigned_rate_limits": { 00:15:06.184 "rw_ios_per_sec": 0, 00:15:06.184 "rw_mbytes_per_sec": 0, 00:15:06.184 "r_mbytes_per_sec": 0, 00:15:06.184 "w_mbytes_per_sec": 0 00:15:06.184 }, 00:15:06.184 "claimed": false, 00:15:06.184 "zoned": false, 00:15:06.184 "supported_io_types": { 00:15:06.184 "read": true, 00:15:06.184 "write": true, 00:15:06.184 "unmap": false, 00:15:06.184 "flush": false, 00:15:06.184 "reset": true, 00:15:06.184 "nvme_admin": false, 00:15:06.184 "nvme_io": false, 00:15:06.184 "nvme_io_md": false, 00:15:06.184 "write_zeroes": true, 00:15:06.184 "zcopy": false, 00:15:06.184 "get_zone_info": false, 00:15:06.184 "zone_management": false, 00:15:06.184 "zone_append": false, 00:15:06.184 "compare": false, 00:15:06.184 "compare_and_write": false, 00:15:06.184 "abort": false, 00:15:06.184 "seek_hole": false, 00:15:06.184 "seek_data": false, 00:15:06.184 "copy": false, 00:15:06.184 "nvme_iov_md": false 00:15:06.184 }, 00:15:06.184 "driver_specific": { 00:15:06.184 "raid": { 00:15:06.184 "uuid": 
"d96e0cd4-a897-4036-9b44-44c5355fbbb1", 00:15:06.184 "strip_size_kb": 64, 00:15:06.184 "state": "online", 00:15:06.184 "raid_level": "raid5f", 00:15:06.184 "superblock": false, 00:15:06.184 "num_base_bdevs": 3, 00:15:06.184 "num_base_bdevs_discovered": 3, 00:15:06.184 "num_base_bdevs_operational": 3, 00:15:06.184 "base_bdevs_list": [ 00:15:06.184 { 00:15:06.184 "name": "NewBaseBdev", 00:15:06.184 "uuid": "878d8238-6631-4ad8-a7f6-14f666943331", 00:15:06.184 "is_configured": true, 00:15:06.184 "data_offset": 0, 00:15:06.184 "data_size": 65536 00:15:06.184 }, 00:15:06.184 { 00:15:06.184 "name": "BaseBdev2", 00:15:06.184 "uuid": "524165c8-9e81-4a56-b5a9-8bc165adc31a", 00:15:06.184 "is_configured": true, 00:15:06.184 "data_offset": 0, 00:15:06.184 "data_size": 65536 00:15:06.184 }, 00:15:06.184 { 00:15:06.184 "name": "BaseBdev3", 00:15:06.184 "uuid": "eb5c6abf-d120-47f6-9a75-c3833f47169d", 00:15:06.184 "is_configured": true, 00:15:06.184 "data_offset": 0, 00:15:06.184 "data_size": 65536 00:15:06.184 } 00:15:06.184 ] 00:15:06.184 } 00:15:06.184 } 00:15:06.184 }' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:06.184 BaseBdev2 00:15:06.184 BaseBdev3' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.184 04:04:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.184 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.444 [2024-11-18 04:04:02.866747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.444 [2024-11-18 04:04:02.866771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.444 [2024-11-18 04:04:02.866847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.444 [2024-11-18 04:04:02.867107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.444 [2024-11-18 04:04:02.867119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79802 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79802 ']' 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79802 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.444 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79802 00:15:06.445 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.445 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.445 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79802' 00:15:06.445 killing process with pid 79802 00:15:06.445 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79802 00:15:06.445 [2024-11-18 04:04:02.907247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.445 04:04:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79802 00:15:06.704 [2024-11-18 04:04:03.184452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.643 04:04:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:07.643 00:15:07.643 real 0m10.293s 00:15:07.643 user 0m16.451s 00:15:07.643 sys 0m1.796s 00:15:07.643 04:04:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.643 04:04:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.643 ************************************ 00:15:07.643 END TEST raid5f_state_function_test 00:15:07.643 ************************************ 00:15:07.643 04:04:04 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:07.643 04:04:04 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:07.643 04:04:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.643 04:04:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.903 ************************************ 00:15:07.903 START TEST raid5f_state_function_test_sb 00:15:07.903 ************************************ 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:07.903 04:04:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80419 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80419' 00:15:07.903 Process raid pid: 80419 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80419 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80419 ']' 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.903 04:04:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.903 [2024-11-18 04:04:04.382982] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:07.903 [2024-11-18 04:04:04.383176] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.163 [2024-11-18 04:04:04.555339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.163 [2024-11-18 04:04:04.662067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.423 [2024-11-18 04:04:04.854005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.423 [2024-11-18 04:04:04.854115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.684 [2024-11-18 04:04:05.203795] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.684 [2024-11-18 04:04:05.203876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.684 [2024-11-18 04:04:05.203887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.684 [2024-11-18 04:04:05.203897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.684 [2024-11-18 04:04:05.203903] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:08.684 [2024-11-18 04:04:05.203912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.684 04:04:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.684 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.684 "name": "Existed_Raid", 00:15:08.684 "uuid": "9deac217-41dd-4903-aa81-a0696c413f12", 00:15:08.684 "strip_size_kb": 64, 00:15:08.684 "state": "configuring", 00:15:08.685 "raid_level": "raid5f", 00:15:08.685 "superblock": true, 00:15:08.685 "num_base_bdevs": 3, 00:15:08.685 "num_base_bdevs_discovered": 0, 00:15:08.685 "num_base_bdevs_operational": 3, 00:15:08.685 "base_bdevs_list": [ 00:15:08.685 { 00:15:08.685 "name": "BaseBdev1", 00:15:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.685 "is_configured": false, 00:15:08.685 "data_offset": 0, 00:15:08.685 "data_size": 0 00:15:08.685 }, 00:15:08.685 { 00:15:08.685 "name": "BaseBdev2", 00:15:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.685 "is_configured": false, 00:15:08.685 "data_offset": 0, 00:15:08.685 "data_size": 0 00:15:08.685 }, 00:15:08.685 { 00:15:08.685 "name": "BaseBdev3", 00:15:08.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.685 "is_configured": false, 00:15:08.685 "data_offset": 0, 00:15:08.685 "data_size": 0 00:15:08.685 } 00:15:08.685 ] 00:15:08.685 }' 00:15:08.685 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.685 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 [2024-11-18 04:04:05.638957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.255 
[2024-11-18 04:04:05.639032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 [2024-11-18 04:04:05.650958] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.255 [2024-11-18 04:04:05.651050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.255 [2024-11-18 04:04:05.651077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.255 [2024-11-18 04:04:05.651099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.255 [2024-11-18 04:04:05.651117] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:09.255 [2024-11-18 04:04:05.651137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.255 [2024-11-18 04:04:05.696752] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.255 BaseBdev1 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.255 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 [ 00:15:09.256 { 00:15:09.256 "name": "BaseBdev1", 00:15:09.256 "aliases": [ 00:15:09.256 "808dd98e-db63-4257-aeed-e4972ef75e9b" 00:15:09.256 ], 00:15:09.256 "product_name": "Malloc disk", 00:15:09.256 "block_size": 512, 00:15:09.256 
"num_blocks": 65536, 00:15:09.256 "uuid": "808dd98e-db63-4257-aeed-e4972ef75e9b", 00:15:09.256 "assigned_rate_limits": { 00:15:09.256 "rw_ios_per_sec": 0, 00:15:09.256 "rw_mbytes_per_sec": 0, 00:15:09.256 "r_mbytes_per_sec": 0, 00:15:09.256 "w_mbytes_per_sec": 0 00:15:09.256 }, 00:15:09.256 "claimed": true, 00:15:09.256 "claim_type": "exclusive_write", 00:15:09.256 "zoned": false, 00:15:09.256 "supported_io_types": { 00:15:09.256 "read": true, 00:15:09.256 "write": true, 00:15:09.256 "unmap": true, 00:15:09.256 "flush": true, 00:15:09.256 "reset": true, 00:15:09.256 "nvme_admin": false, 00:15:09.256 "nvme_io": false, 00:15:09.256 "nvme_io_md": false, 00:15:09.256 "write_zeroes": true, 00:15:09.256 "zcopy": true, 00:15:09.256 "get_zone_info": false, 00:15:09.256 "zone_management": false, 00:15:09.256 "zone_append": false, 00:15:09.256 "compare": false, 00:15:09.256 "compare_and_write": false, 00:15:09.256 "abort": true, 00:15:09.256 "seek_hole": false, 00:15:09.256 "seek_data": false, 00:15:09.256 "copy": true, 00:15:09.256 "nvme_iov_md": false 00:15:09.256 }, 00:15:09.256 "memory_domains": [ 00:15:09.256 { 00:15:09.256 "dma_device_id": "system", 00:15:09.256 "dma_device_type": 1 00:15:09.256 }, 00:15:09.256 { 00:15:09.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.256 "dma_device_type": 2 00:15:09.256 } 00:15:09.256 ], 00:15:09.256 "driver_specific": {} 00:15:09.256 } 00:15:09.256 ] 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.256 "name": "Existed_Raid", 00:15:09.256 "uuid": "395c2feb-36e1-47c5-8ebb-2b986a528c02", 00:15:09.256 "strip_size_kb": 64, 00:15:09.256 "state": "configuring", 00:15:09.256 "raid_level": "raid5f", 00:15:09.256 "superblock": true, 00:15:09.256 "num_base_bdevs": 3, 00:15:09.256 "num_base_bdevs_discovered": 1, 00:15:09.256 "num_base_bdevs_operational": 3, 00:15:09.256 "base_bdevs_list": [ 00:15:09.256 { 00:15:09.256 
"name": "BaseBdev1", 00:15:09.256 "uuid": "808dd98e-db63-4257-aeed-e4972ef75e9b", 00:15:09.256 "is_configured": true, 00:15:09.256 "data_offset": 2048, 00:15:09.256 "data_size": 63488 00:15:09.256 }, 00:15:09.256 { 00:15:09.256 "name": "BaseBdev2", 00:15:09.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.256 "is_configured": false, 00:15:09.256 "data_offset": 0, 00:15:09.256 "data_size": 0 00:15:09.256 }, 00:15:09.256 { 00:15:09.256 "name": "BaseBdev3", 00:15:09.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.256 "is_configured": false, 00:15:09.256 "data_offset": 0, 00:15:09.256 "data_size": 0 00:15:09.256 } 00:15:09.256 ] 00:15:09.256 }' 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.256 04:04:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.516 [2024-11-18 04:04:06.147995] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.516 [2024-11-18 04:04:06.148079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.516 04:04:06 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:09.776 [2024-11-18 04:04:06.160043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.776 [2024-11-18 04:04:06.161793] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.776 [2024-11-18 04:04:06.161843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.776 [2024-11-18 04:04:06.161854] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:09.776 [2024-11-18 04:04:06.161863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.776 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.777 "name": "Existed_Raid", 00:15:09.777 "uuid": "e7a84821-6fff-49bd-9f5c-3a95ac368172", 00:15:09.777 "strip_size_kb": 64, 00:15:09.777 "state": "configuring", 00:15:09.777 "raid_level": "raid5f", 00:15:09.777 "superblock": true, 00:15:09.777 "num_base_bdevs": 3, 00:15:09.777 "num_base_bdevs_discovered": 1, 00:15:09.777 "num_base_bdevs_operational": 3, 00:15:09.777 "base_bdevs_list": [ 00:15:09.777 { 00:15:09.777 "name": "BaseBdev1", 00:15:09.777 "uuid": "808dd98e-db63-4257-aeed-e4972ef75e9b", 00:15:09.777 "is_configured": true, 00:15:09.777 "data_offset": 2048, 00:15:09.777 "data_size": 63488 00:15:09.777 }, 00:15:09.777 { 00:15:09.777 "name": "BaseBdev2", 00:15:09.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.777 "is_configured": false, 00:15:09.777 "data_offset": 0, 00:15:09.777 "data_size": 0 00:15:09.777 }, 00:15:09.777 { 00:15:09.777 "name": "BaseBdev3", 00:15:09.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.777 "is_configured": false, 00:15:09.777 "data_offset": 0, 00:15:09.777 "data_size": 
0 00:15:09.777 } 00:15:09.777 ] 00:15:09.777 }' 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.777 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.037 [2024-11-18 04:04:06.660556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.037 BaseBdev2 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.037 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.297 [ 00:15:10.297 { 00:15:10.297 "name": "BaseBdev2", 00:15:10.297 "aliases": [ 00:15:10.297 "c7331448-5b6a-4daa-ae48-3e56494b9562" 00:15:10.297 ], 00:15:10.297 "product_name": "Malloc disk", 00:15:10.297 "block_size": 512, 00:15:10.297 "num_blocks": 65536, 00:15:10.297 "uuid": "c7331448-5b6a-4daa-ae48-3e56494b9562", 00:15:10.297 "assigned_rate_limits": { 00:15:10.297 "rw_ios_per_sec": 0, 00:15:10.297 "rw_mbytes_per_sec": 0, 00:15:10.297 "r_mbytes_per_sec": 0, 00:15:10.297 "w_mbytes_per_sec": 0 00:15:10.297 }, 00:15:10.297 "claimed": true, 00:15:10.297 "claim_type": "exclusive_write", 00:15:10.297 "zoned": false, 00:15:10.297 "supported_io_types": { 00:15:10.297 "read": true, 00:15:10.297 "write": true, 00:15:10.297 "unmap": true, 00:15:10.297 "flush": true, 00:15:10.297 "reset": true, 00:15:10.297 "nvme_admin": false, 00:15:10.297 "nvme_io": false, 00:15:10.297 "nvme_io_md": false, 00:15:10.297 "write_zeroes": true, 00:15:10.297 "zcopy": true, 00:15:10.297 "get_zone_info": false, 00:15:10.297 "zone_management": false, 00:15:10.297 "zone_append": false, 00:15:10.297 "compare": false, 00:15:10.297 "compare_and_write": false, 00:15:10.297 "abort": true, 00:15:10.297 "seek_hole": false, 00:15:10.297 "seek_data": false, 00:15:10.297 "copy": true, 00:15:10.297 "nvme_iov_md": false 00:15:10.297 }, 00:15:10.297 "memory_domains": [ 00:15:10.297 { 00:15:10.297 "dma_device_id": "system", 00:15:10.297 "dma_device_type": 1 00:15:10.297 }, 00:15:10.297 { 00:15:10.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.297 "dma_device_type": 2 00:15:10.297 } 
00:15:10.297 ], 00:15:10.297 "driver_specific": {} 00:15:10.297 } 00:15:10.297 ] 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.297 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.297 "name": "Existed_Raid", 00:15:10.297 "uuid": "e7a84821-6fff-49bd-9f5c-3a95ac368172", 00:15:10.298 "strip_size_kb": 64, 00:15:10.298 "state": "configuring", 00:15:10.298 "raid_level": "raid5f", 00:15:10.298 "superblock": true, 00:15:10.298 "num_base_bdevs": 3, 00:15:10.298 "num_base_bdevs_discovered": 2, 00:15:10.298 "num_base_bdevs_operational": 3, 00:15:10.298 "base_bdevs_list": [ 00:15:10.298 { 00:15:10.298 "name": "BaseBdev1", 00:15:10.298 "uuid": "808dd98e-db63-4257-aeed-e4972ef75e9b", 00:15:10.298 "is_configured": true, 00:15:10.298 "data_offset": 2048, 00:15:10.298 "data_size": 63488 00:15:10.298 }, 00:15:10.298 { 00:15:10.298 "name": "BaseBdev2", 00:15:10.298 "uuid": "c7331448-5b6a-4daa-ae48-3e56494b9562", 00:15:10.298 "is_configured": true, 00:15:10.298 "data_offset": 2048, 00:15:10.298 "data_size": 63488 00:15:10.298 }, 00:15:10.298 { 00:15:10.298 "name": "BaseBdev3", 00:15:10.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.298 "is_configured": false, 00:15:10.298 "data_offset": 0, 00:15:10.298 "data_size": 0 00:15:10.298 } 00:15:10.298 ] 00:15:10.298 }' 00:15:10.298 04:04:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.298 04:04:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.557 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:10.557 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:10.557 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.818 [2024-11-18 04:04:07.197322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.818 [2024-11-18 04:04:07.197580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:10.818 [2024-11-18 04:04:07.197602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:10.818 [2024-11-18 04:04:07.197880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:10.818 BaseBdev3 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.818 [2024-11-18 04:04:07.203525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:10.818 [2024-11-18 04:04:07.203546] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:10.818 [2024-11-18 04:04:07.203709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.818 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.818 [ 00:15:10.818 { 00:15:10.818 "name": "BaseBdev3", 00:15:10.818 "aliases": [ 00:15:10.818 "ef13d49a-1b5d-4c8f-8804-47ae460b981d" 00:15:10.818 ], 00:15:10.818 "product_name": "Malloc disk", 00:15:10.818 "block_size": 512, 00:15:10.818 "num_blocks": 65536, 00:15:10.818 "uuid": "ef13d49a-1b5d-4c8f-8804-47ae460b981d", 00:15:10.818 "assigned_rate_limits": { 00:15:10.818 "rw_ios_per_sec": 0, 00:15:10.818 "rw_mbytes_per_sec": 0, 00:15:10.818 "r_mbytes_per_sec": 0, 00:15:10.818 "w_mbytes_per_sec": 0 00:15:10.818 }, 00:15:10.818 "claimed": true, 00:15:10.818 "claim_type": "exclusive_write", 00:15:10.818 "zoned": false, 00:15:10.818 "supported_io_types": { 00:15:10.818 "read": true, 00:15:10.818 "write": true, 00:15:10.818 "unmap": true, 00:15:10.818 "flush": true, 00:15:10.818 "reset": true, 00:15:10.818 "nvme_admin": false, 00:15:10.818 "nvme_io": false, 00:15:10.818 "nvme_io_md": false, 00:15:10.818 "write_zeroes": true, 00:15:10.818 "zcopy": true, 00:15:10.818 "get_zone_info": false, 00:15:10.818 "zone_management": false, 00:15:10.818 "zone_append": false, 00:15:10.818 "compare": false, 00:15:10.818 "compare_and_write": false, 00:15:10.818 "abort": true, 00:15:10.818 "seek_hole": false, 00:15:10.818 "seek_data": false, 00:15:10.818 "copy": true, 00:15:10.818 
"nvme_iov_md": false 00:15:10.818 }, 00:15:10.818 "memory_domains": [ 00:15:10.818 { 00:15:10.818 "dma_device_id": "system", 00:15:10.818 "dma_device_type": 1 00:15:10.818 }, 00:15:10.818 { 00:15:10.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.818 "dma_device_type": 2 00:15:10.818 } 00:15:10.818 ], 00:15:10.818 "driver_specific": {} 00:15:10.818 } 00:15:10.818 ] 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.819 "name": "Existed_Raid", 00:15:10.819 "uuid": "e7a84821-6fff-49bd-9f5c-3a95ac368172", 00:15:10.819 "strip_size_kb": 64, 00:15:10.819 "state": "online", 00:15:10.819 "raid_level": "raid5f", 00:15:10.819 "superblock": true, 00:15:10.819 "num_base_bdevs": 3, 00:15:10.819 "num_base_bdevs_discovered": 3, 00:15:10.819 "num_base_bdevs_operational": 3, 00:15:10.819 "base_bdevs_list": [ 00:15:10.819 { 00:15:10.819 "name": "BaseBdev1", 00:15:10.819 "uuid": "808dd98e-db63-4257-aeed-e4972ef75e9b", 00:15:10.819 "is_configured": true, 00:15:10.819 "data_offset": 2048, 00:15:10.819 "data_size": 63488 00:15:10.819 }, 00:15:10.819 { 00:15:10.819 "name": "BaseBdev2", 00:15:10.819 "uuid": "c7331448-5b6a-4daa-ae48-3e56494b9562", 00:15:10.819 "is_configured": true, 00:15:10.819 "data_offset": 2048, 00:15:10.819 "data_size": 63488 00:15:10.819 }, 00:15:10.819 { 00:15:10.819 "name": "BaseBdev3", 00:15:10.819 "uuid": "ef13d49a-1b5d-4c8f-8804-47ae460b981d", 00:15:10.819 "is_configured": true, 00:15:10.819 "data_offset": 2048, 00:15:10.819 "data_size": 63488 00:15:10.819 } 00:15:10.819 ] 00:15:10.819 }' 00:15:10.819 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.819 04:04:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.079 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.079 [2024-11-18 04:04:07.712786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.342 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.342 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.342 "name": "Existed_Raid", 00:15:11.342 "aliases": [ 00:15:11.342 "e7a84821-6fff-49bd-9f5c-3a95ac368172" 00:15:11.342 ], 00:15:11.342 "product_name": "Raid Volume", 00:15:11.342 "block_size": 512, 00:15:11.342 "num_blocks": 126976, 00:15:11.342 "uuid": "e7a84821-6fff-49bd-9f5c-3a95ac368172", 00:15:11.342 "assigned_rate_limits": { 00:15:11.342 "rw_ios_per_sec": 0, 00:15:11.342 
"rw_mbytes_per_sec": 0, 00:15:11.342 "r_mbytes_per_sec": 0, 00:15:11.342 "w_mbytes_per_sec": 0 00:15:11.342 }, 00:15:11.342 "claimed": false, 00:15:11.342 "zoned": false, 00:15:11.342 "supported_io_types": { 00:15:11.342 "read": true, 00:15:11.342 "write": true, 00:15:11.342 "unmap": false, 00:15:11.342 "flush": false, 00:15:11.342 "reset": true, 00:15:11.342 "nvme_admin": false, 00:15:11.342 "nvme_io": false, 00:15:11.342 "nvme_io_md": false, 00:15:11.342 "write_zeroes": true, 00:15:11.342 "zcopy": false, 00:15:11.342 "get_zone_info": false, 00:15:11.342 "zone_management": false, 00:15:11.342 "zone_append": false, 00:15:11.342 "compare": false, 00:15:11.342 "compare_and_write": false, 00:15:11.342 "abort": false, 00:15:11.342 "seek_hole": false, 00:15:11.342 "seek_data": false, 00:15:11.342 "copy": false, 00:15:11.342 "nvme_iov_md": false 00:15:11.342 }, 00:15:11.342 "driver_specific": { 00:15:11.342 "raid": { 00:15:11.342 "uuid": "e7a84821-6fff-49bd-9f5c-3a95ac368172", 00:15:11.342 "strip_size_kb": 64, 00:15:11.342 "state": "online", 00:15:11.342 "raid_level": "raid5f", 00:15:11.342 "superblock": true, 00:15:11.342 "num_base_bdevs": 3, 00:15:11.342 "num_base_bdevs_discovered": 3, 00:15:11.342 "num_base_bdevs_operational": 3, 00:15:11.342 "base_bdevs_list": [ 00:15:11.342 { 00:15:11.342 "name": "BaseBdev1", 00:15:11.342 "uuid": "808dd98e-db63-4257-aeed-e4972ef75e9b", 00:15:11.342 "is_configured": true, 00:15:11.342 "data_offset": 2048, 00:15:11.342 "data_size": 63488 00:15:11.342 }, 00:15:11.342 { 00:15:11.342 "name": "BaseBdev2", 00:15:11.342 "uuid": "c7331448-5b6a-4daa-ae48-3e56494b9562", 00:15:11.342 "is_configured": true, 00:15:11.342 "data_offset": 2048, 00:15:11.342 "data_size": 63488 00:15:11.342 }, 00:15:11.342 { 00:15:11.342 "name": "BaseBdev3", 00:15:11.342 "uuid": "ef13d49a-1b5d-4c8f-8804-47ae460b981d", 00:15:11.342 "is_configured": true, 00:15:11.342 "data_offset": 2048, 00:15:11.342 "data_size": 63488 00:15:11.342 } 00:15:11.342 ] 00:15:11.342 } 
00:15:11.342 } 00:15:11.342 }' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:11.343 BaseBdev2 00:15:11.343 BaseBdev3' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.343 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.614 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.614 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.614 04:04:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:11.614 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.614 04:04:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.614 [2024-11-18 04:04:07.992139] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.614 "name": "Existed_Raid", 00:15:11.614 "uuid": "e7a84821-6fff-49bd-9f5c-3a95ac368172", 00:15:11.614 "strip_size_kb": 64, 00:15:11.614 "state": "online", 00:15:11.614 "raid_level": "raid5f", 00:15:11.614 "superblock": true, 00:15:11.614 "num_base_bdevs": 3, 00:15:11.614 "num_base_bdevs_discovered": 2, 00:15:11.614 "num_base_bdevs_operational": 2, 00:15:11.614 "base_bdevs_list": [ 00:15:11.614 { 00:15:11.614 "name": null, 00:15:11.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.614 "is_configured": false, 00:15:11.614 "data_offset": 0, 00:15:11.614 "data_size": 63488 00:15:11.614 }, 00:15:11.614 { 00:15:11.614 "name": "BaseBdev2", 00:15:11.614 "uuid": "c7331448-5b6a-4daa-ae48-3e56494b9562", 00:15:11.614 "is_configured": true, 00:15:11.614 "data_offset": 2048, 00:15:11.614 "data_size": 63488 00:15:11.614 }, 00:15:11.614 { 00:15:11.614 "name": "BaseBdev3", 00:15:11.614 "uuid": "ef13d49a-1b5d-4c8f-8804-47ae460b981d", 00:15:11.614 "is_configured": true, 00:15:11.614 "data_offset": 2048, 00:15:11.614 "data_size": 63488 00:15:11.614 } 00:15:11.614 ] 00:15:11.614 }' 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.614 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.888 04:04:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:11.888 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:11.888 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.888 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.888 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.888 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:11.888 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.148 [2024-11-18 04:04:08.538928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.148 [2024-11-18 04:04:08.539066] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.148 [2024-11-18 04:04:08.628568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.148 [2024-11-18 04:04:08.688488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:12.148 [2024-11-18 04:04:08.688531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:12.148 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:12.149 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:12.149 04:04:08 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.149 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.149 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.409 BaseBdev2 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.409 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 [ 00:15:12.410 { 00:15:12.410 "name": "BaseBdev2", 00:15:12.410 "aliases": [ 00:15:12.410 "a2f1f8b1-57cf-467a-b4ba-44c64757041b" 00:15:12.410 ], 00:15:12.410 "product_name": "Malloc disk", 00:15:12.410 "block_size": 512, 00:15:12.410 "num_blocks": 65536, 00:15:12.410 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:12.410 "assigned_rate_limits": { 00:15:12.410 "rw_ios_per_sec": 0, 00:15:12.410 "rw_mbytes_per_sec": 0, 00:15:12.410 "r_mbytes_per_sec": 0, 00:15:12.410 "w_mbytes_per_sec": 0 00:15:12.410 }, 00:15:12.410 "claimed": false, 00:15:12.410 "zoned": false, 00:15:12.410 "supported_io_types": { 00:15:12.410 "read": true, 00:15:12.410 "write": true, 00:15:12.410 "unmap": true, 00:15:12.410 "flush": true, 00:15:12.410 "reset": true, 00:15:12.410 "nvme_admin": false, 00:15:12.410 "nvme_io": false, 00:15:12.410 "nvme_io_md": false, 00:15:12.410 "write_zeroes": true, 00:15:12.410 "zcopy": true, 00:15:12.410 "get_zone_info": false, 00:15:12.410 "zone_management": false, 00:15:12.410 "zone_append": false, 
00:15:12.410 "compare": false, 00:15:12.410 "compare_and_write": false, 00:15:12.410 "abort": true, 00:15:12.410 "seek_hole": false, 00:15:12.410 "seek_data": false, 00:15:12.410 "copy": true, 00:15:12.410 "nvme_iov_md": false 00:15:12.410 }, 00:15:12.410 "memory_domains": [ 00:15:12.410 { 00:15:12.410 "dma_device_id": "system", 00:15:12.410 "dma_device_type": 1 00:15:12.410 }, 00:15:12.410 { 00:15:12.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.410 "dma_device_type": 2 00:15:12.410 } 00:15:12.410 ], 00:15:12.410 "driver_specific": {} 00:15:12.410 } 00:15:12.410 ] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 BaseBdev3 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:12.410 
04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 [ 00:15:12.410 { 00:15:12.410 "name": "BaseBdev3", 00:15:12.410 "aliases": [ 00:15:12.410 "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7" 00:15:12.410 ], 00:15:12.410 "product_name": "Malloc disk", 00:15:12.410 "block_size": 512, 00:15:12.410 "num_blocks": 65536, 00:15:12.410 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:12.410 "assigned_rate_limits": { 00:15:12.410 "rw_ios_per_sec": 0, 00:15:12.410 "rw_mbytes_per_sec": 0, 00:15:12.410 "r_mbytes_per_sec": 0, 00:15:12.410 "w_mbytes_per_sec": 0 00:15:12.410 }, 00:15:12.410 "claimed": false, 00:15:12.410 "zoned": false, 00:15:12.410 "supported_io_types": { 00:15:12.410 "read": true, 00:15:12.410 "write": true, 00:15:12.410 "unmap": true, 00:15:12.410 "flush": true, 00:15:12.410 "reset": true, 00:15:12.410 "nvme_admin": false, 00:15:12.410 "nvme_io": false, 00:15:12.410 "nvme_io_md": false, 00:15:12.410 "write_zeroes": true, 00:15:12.410 "zcopy": true, 00:15:12.410 "get_zone_info": 
false, 00:15:12.410 "zone_management": false, 00:15:12.410 "zone_append": false, 00:15:12.410 "compare": false, 00:15:12.410 "compare_and_write": false, 00:15:12.410 "abort": true, 00:15:12.410 "seek_hole": false, 00:15:12.410 "seek_data": false, 00:15:12.410 "copy": true, 00:15:12.410 "nvme_iov_md": false 00:15:12.410 }, 00:15:12.410 "memory_domains": [ 00:15:12.410 { 00:15:12.410 "dma_device_id": "system", 00:15:12.410 "dma_device_type": 1 00:15:12.410 }, 00:15:12.410 { 00:15:12.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.410 "dma_device_type": 2 00:15:12.410 } 00:15:12.410 ], 00:15:12.410 "driver_specific": {} 00:15:12.410 } 00:15:12.410 ] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 [2024-11-18 04:04:08.980261] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.410 [2024-11-18 04:04:08.980304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.410 [2024-11-18 04:04:08.980324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.410 [2024-11-18 04:04:08.982009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.410 04:04:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.410 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.410 04:04:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.410 "name": "Existed_Raid", 00:15:12.410 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:12.410 "strip_size_kb": 64, 00:15:12.410 "state": "configuring", 00:15:12.410 "raid_level": "raid5f", 00:15:12.410 "superblock": true, 00:15:12.410 "num_base_bdevs": 3, 00:15:12.410 "num_base_bdevs_discovered": 2, 00:15:12.410 "num_base_bdevs_operational": 3, 00:15:12.410 "base_bdevs_list": [ 00:15:12.410 { 00:15:12.410 "name": "BaseBdev1", 00:15:12.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.410 "is_configured": false, 00:15:12.410 "data_offset": 0, 00:15:12.410 "data_size": 0 00:15:12.410 }, 00:15:12.410 { 00:15:12.410 "name": "BaseBdev2", 00:15:12.410 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:12.410 "is_configured": true, 00:15:12.410 "data_offset": 2048, 00:15:12.410 "data_size": 63488 00:15:12.410 }, 00:15:12.410 { 00:15:12.410 "name": "BaseBdev3", 00:15:12.410 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:12.410 "is_configured": true, 00:15:12.410 "data_offset": 2048, 00:15:12.410 "data_size": 63488 00:15:12.411 } 00:15:12.411 ] 00:15:12.411 }' 00:15:12.411 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.411 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.980 [2024-11-18 04:04:09.459944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.980 
04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.980 "name": "Existed_Raid", 00:15:12.980 "uuid": 
"2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:12.980 "strip_size_kb": 64, 00:15:12.980 "state": "configuring", 00:15:12.980 "raid_level": "raid5f", 00:15:12.980 "superblock": true, 00:15:12.980 "num_base_bdevs": 3, 00:15:12.980 "num_base_bdevs_discovered": 1, 00:15:12.980 "num_base_bdevs_operational": 3, 00:15:12.980 "base_bdevs_list": [ 00:15:12.980 { 00:15:12.980 "name": "BaseBdev1", 00:15:12.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.980 "is_configured": false, 00:15:12.980 "data_offset": 0, 00:15:12.980 "data_size": 0 00:15:12.980 }, 00:15:12.980 { 00:15:12.980 "name": null, 00:15:12.980 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:12.980 "is_configured": false, 00:15:12.980 "data_offset": 0, 00:15:12.980 "data_size": 63488 00:15:12.980 }, 00:15:12.980 { 00:15:12.980 "name": "BaseBdev3", 00:15:12.980 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:12.980 "is_configured": true, 00:15:12.980 "data_offset": 2048, 00:15:12.980 "data_size": 63488 00:15:12.980 } 00:15:12.980 ] 00:15:12.980 }' 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.980 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:13.550 04:04:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 [2024-11-18 04:04:09.990785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.550 BaseBdev1 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.550 04:04:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.550 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.550 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:13.550 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 [ 00:15:13.550 { 00:15:13.550 "name": "BaseBdev1", 00:15:13.550 "aliases": [ 00:15:13.550 "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2" 00:15:13.550 ], 00:15:13.550 "product_name": "Malloc disk", 00:15:13.550 "block_size": 512, 00:15:13.550 "num_blocks": 65536, 00:15:13.550 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:13.550 "assigned_rate_limits": { 00:15:13.550 "rw_ios_per_sec": 0, 00:15:13.550 "rw_mbytes_per_sec": 0, 00:15:13.550 "r_mbytes_per_sec": 0, 00:15:13.550 "w_mbytes_per_sec": 0 00:15:13.550 }, 00:15:13.550 "claimed": true, 00:15:13.550 "claim_type": "exclusive_write", 00:15:13.550 "zoned": false, 00:15:13.550 "supported_io_types": { 00:15:13.550 "read": true, 00:15:13.550 "write": true, 00:15:13.550 "unmap": true, 00:15:13.550 "flush": true, 00:15:13.550 "reset": true, 00:15:13.550 "nvme_admin": false, 00:15:13.550 "nvme_io": false, 00:15:13.550 "nvme_io_md": false, 00:15:13.550 "write_zeroes": true, 00:15:13.550 "zcopy": true, 00:15:13.550 "get_zone_info": false, 00:15:13.550 "zone_management": false, 00:15:13.550 "zone_append": false, 00:15:13.550 "compare": false, 00:15:13.550 "compare_and_write": false, 00:15:13.550 "abort": true, 00:15:13.550 "seek_hole": false, 00:15:13.551 "seek_data": false, 00:15:13.551 "copy": true, 00:15:13.551 "nvme_iov_md": false 00:15:13.551 }, 00:15:13.551 "memory_domains": [ 00:15:13.551 { 00:15:13.551 "dma_device_id": "system", 00:15:13.551 "dma_device_type": 1 00:15:13.551 }, 00:15:13.551 { 00:15:13.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.551 "dma_device_type": 2 00:15:13.551 } 00:15:13.551 ], 00:15:13.551 "driver_specific": {} 00:15:13.551 } 00:15:13.551 ] 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.551 "name": "Existed_Raid", 00:15:13.551 "uuid": 
"2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:13.551 "strip_size_kb": 64, 00:15:13.551 "state": "configuring", 00:15:13.551 "raid_level": "raid5f", 00:15:13.551 "superblock": true, 00:15:13.551 "num_base_bdevs": 3, 00:15:13.551 "num_base_bdevs_discovered": 2, 00:15:13.551 "num_base_bdevs_operational": 3, 00:15:13.551 "base_bdevs_list": [ 00:15:13.551 { 00:15:13.551 "name": "BaseBdev1", 00:15:13.551 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:13.551 "is_configured": true, 00:15:13.551 "data_offset": 2048, 00:15:13.551 "data_size": 63488 00:15:13.551 }, 00:15:13.551 { 00:15:13.551 "name": null, 00:15:13.551 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:13.551 "is_configured": false, 00:15:13.551 "data_offset": 0, 00:15:13.551 "data_size": 63488 00:15:13.551 }, 00:15:13.551 { 00:15:13.551 "name": "BaseBdev3", 00:15:13.551 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:13.551 "is_configured": true, 00:15:13.551 "data_offset": 2048, 00:15:13.551 "data_size": 63488 00:15:13.551 } 00:15:13.551 ] 00:15:13.551 }' 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.551 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:14.120 04:04:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.120 [2024-11-18 04:04:10.517895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.120 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.120 "name": "Existed_Raid", 00:15:14.120 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:14.120 "strip_size_kb": 64, 00:15:14.120 "state": "configuring", 00:15:14.120 "raid_level": "raid5f", 00:15:14.120 "superblock": true, 00:15:14.120 "num_base_bdevs": 3, 00:15:14.120 "num_base_bdevs_discovered": 1, 00:15:14.120 "num_base_bdevs_operational": 3, 00:15:14.120 "base_bdevs_list": [ 00:15:14.120 { 00:15:14.120 "name": "BaseBdev1", 00:15:14.121 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:14.121 "is_configured": true, 00:15:14.121 "data_offset": 2048, 00:15:14.121 "data_size": 63488 00:15:14.121 }, 00:15:14.121 { 00:15:14.121 "name": null, 00:15:14.121 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:14.121 "is_configured": false, 00:15:14.121 "data_offset": 0, 00:15:14.121 "data_size": 63488 00:15:14.121 }, 00:15:14.121 { 00:15:14.121 "name": null, 00:15:14.121 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:14.121 "is_configured": false, 00:15:14.121 "data_offset": 0, 00:15:14.121 "data_size": 63488 00:15:14.121 } 00:15:14.121 ] 00:15:14.121 }' 00:15:14.121 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.121 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.381 [2024-11-18 04:04:10.945195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.381 "name": "Existed_Raid", 00:15:14.381 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:14.381 "strip_size_kb": 64, 00:15:14.381 "state": "configuring", 00:15:14.381 "raid_level": "raid5f", 00:15:14.381 "superblock": true, 00:15:14.381 "num_base_bdevs": 3, 00:15:14.381 "num_base_bdevs_discovered": 2, 00:15:14.381 "num_base_bdevs_operational": 3, 00:15:14.381 "base_bdevs_list": [ 00:15:14.381 { 00:15:14.381 "name": "BaseBdev1", 00:15:14.381 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:14.381 "is_configured": true, 00:15:14.381 "data_offset": 2048, 00:15:14.381 "data_size": 63488 00:15:14.381 }, 00:15:14.381 { 00:15:14.381 "name": null, 00:15:14.381 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:14.381 "is_configured": false, 00:15:14.381 "data_offset": 0, 00:15:14.381 "data_size": 63488 00:15:14.381 }, 00:15:14.381 { 00:15:14.381 "name": "BaseBdev3", 00:15:14.381 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 
00:15:14.381 "is_configured": true, 00:15:14.381 "data_offset": 2048, 00:15:14.381 "data_size": 63488 00:15:14.381 } 00:15:14.381 ] 00:15:14.381 }' 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.381 04:04:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.950 [2024-11-18 04:04:11.412394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.950 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.951 "name": "Existed_Raid", 00:15:14.951 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:14.951 "strip_size_kb": 64, 00:15:14.951 "state": "configuring", 00:15:14.951 "raid_level": "raid5f", 00:15:14.951 "superblock": true, 00:15:14.951 "num_base_bdevs": 3, 00:15:14.951 "num_base_bdevs_discovered": 1, 00:15:14.951 "num_base_bdevs_operational": 3, 00:15:14.951 "base_bdevs_list": [ 00:15:14.951 { 00:15:14.951 
"name": null, 00:15:14.951 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:14.951 "is_configured": false, 00:15:14.951 "data_offset": 0, 00:15:14.951 "data_size": 63488 00:15:14.951 }, 00:15:14.951 { 00:15:14.951 "name": null, 00:15:14.951 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:14.951 "is_configured": false, 00:15:14.951 "data_offset": 0, 00:15:14.951 "data_size": 63488 00:15:14.951 }, 00:15:14.951 { 00:15:14.951 "name": "BaseBdev3", 00:15:14.951 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:14.951 "is_configured": true, 00:15:14.951 "data_offset": 2048, 00:15:14.951 "data_size": 63488 00:15:14.951 } 00:15:14.951 ] 00:15:14.951 }' 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.951 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.521 [2024-11-18 
04:04:11.973073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.521 04:04:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.521 04:04:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.521 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.521 "name": "Existed_Raid", 00:15:15.521 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:15.521 "strip_size_kb": 64, 00:15:15.521 "state": "configuring", 00:15:15.521 "raid_level": "raid5f", 00:15:15.521 "superblock": true, 00:15:15.521 "num_base_bdevs": 3, 00:15:15.521 "num_base_bdevs_discovered": 2, 00:15:15.521 "num_base_bdevs_operational": 3, 00:15:15.521 "base_bdevs_list": [ 00:15:15.521 { 00:15:15.521 "name": null, 00:15:15.521 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:15.521 "is_configured": false, 00:15:15.521 "data_offset": 0, 00:15:15.521 "data_size": 63488 00:15:15.521 }, 00:15:15.521 { 00:15:15.521 "name": "BaseBdev2", 00:15:15.521 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:15.521 "is_configured": true, 00:15:15.521 "data_offset": 2048, 00:15:15.521 "data_size": 63488 00:15:15.521 }, 00:15:15.521 { 00:15:15.521 "name": "BaseBdev3", 00:15:15.521 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:15.521 "is_configured": true, 00:15:15.521 "data_offset": 2048, 00:15:15.521 "data_size": 63488 00:15:15.521 } 00:15:15.521 ] 00:15:15.521 }' 00:15:15.521 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.521 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.780 04:04:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.780 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf603dde-dee5-4d06-a7eb-2bf0600ffcd2 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 [2024-11-18 04:04:12.487127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:16.040 [2024-11-18 04:04:12.487367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:16.040 [2024-11-18 04:04:12.487383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.040 [2024-11-18 04:04:12.487629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:16.040 NewBaseBdev 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:16.040 04:04:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 [2024-11-18 04:04:12.493161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:16.040 [2024-11-18 04:04:12.493184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:16.040 [2024-11-18 04:04:12.493341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.040 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 [ 00:15:16.040 { 00:15:16.040 "name": "NewBaseBdev", 00:15:16.040 "aliases": [ 00:15:16.040 "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2" 00:15:16.040 ], 00:15:16.040 "product_name": "Malloc 
disk", 00:15:16.040 "block_size": 512, 00:15:16.040 "num_blocks": 65536, 00:15:16.040 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:16.040 "assigned_rate_limits": { 00:15:16.040 "rw_ios_per_sec": 0, 00:15:16.040 "rw_mbytes_per_sec": 0, 00:15:16.040 "r_mbytes_per_sec": 0, 00:15:16.040 "w_mbytes_per_sec": 0 00:15:16.040 }, 00:15:16.040 "claimed": true, 00:15:16.040 "claim_type": "exclusive_write", 00:15:16.040 "zoned": false, 00:15:16.040 "supported_io_types": { 00:15:16.040 "read": true, 00:15:16.040 "write": true, 00:15:16.040 "unmap": true, 00:15:16.040 "flush": true, 00:15:16.040 "reset": true, 00:15:16.040 "nvme_admin": false, 00:15:16.040 "nvme_io": false, 00:15:16.040 "nvme_io_md": false, 00:15:16.040 "write_zeroes": true, 00:15:16.040 "zcopy": true, 00:15:16.040 "get_zone_info": false, 00:15:16.040 "zone_management": false, 00:15:16.040 "zone_append": false, 00:15:16.040 "compare": false, 00:15:16.040 "compare_and_write": false, 00:15:16.040 "abort": true, 00:15:16.040 "seek_hole": false, 00:15:16.040 "seek_data": false, 00:15:16.040 "copy": true, 00:15:16.040 "nvme_iov_md": false 00:15:16.040 }, 00:15:16.040 "memory_domains": [ 00:15:16.040 { 00:15:16.040 "dma_device_id": "system", 00:15:16.041 "dma_device_type": 1 00:15:16.041 }, 00:15:16.041 { 00:15:16.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.041 "dma_device_type": 2 00:15:16.041 } 00:15:16.041 ], 00:15:16.041 "driver_specific": {} 00:15:16.041 } 00:15:16.041 ] 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.041 04:04:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.041 "name": "Existed_Raid", 00:15:16.041 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:16.041 "strip_size_kb": 64, 00:15:16.041 "state": "online", 00:15:16.041 "raid_level": "raid5f", 00:15:16.041 "superblock": true, 00:15:16.041 "num_base_bdevs": 3, 00:15:16.041 "num_base_bdevs_discovered": 3, 00:15:16.041 "num_base_bdevs_operational": 3, 00:15:16.041 
"base_bdevs_list": [ 00:15:16.041 { 00:15:16.041 "name": "NewBaseBdev", 00:15:16.041 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:16.041 "is_configured": true, 00:15:16.041 "data_offset": 2048, 00:15:16.041 "data_size": 63488 00:15:16.041 }, 00:15:16.041 { 00:15:16.041 "name": "BaseBdev2", 00:15:16.041 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:16.041 "is_configured": true, 00:15:16.041 "data_offset": 2048, 00:15:16.041 "data_size": 63488 00:15:16.041 }, 00:15:16.041 { 00:15:16.041 "name": "BaseBdev3", 00:15:16.041 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:16.041 "is_configured": true, 00:15:16.041 "data_offset": 2048, 00:15:16.041 "data_size": 63488 00:15:16.041 } 00:15:16.041 ] 00:15:16.041 }' 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.041 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.611 [2024-11-18 04:04:12.966792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.611 04:04:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.611 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.611 "name": "Existed_Raid", 00:15:16.611 "aliases": [ 00:15:16.611 "2204c05b-b7f1-4377-b20a-45f107eef58b" 00:15:16.611 ], 00:15:16.611 "product_name": "Raid Volume", 00:15:16.611 "block_size": 512, 00:15:16.611 "num_blocks": 126976, 00:15:16.611 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:16.611 "assigned_rate_limits": { 00:15:16.611 "rw_ios_per_sec": 0, 00:15:16.611 "rw_mbytes_per_sec": 0, 00:15:16.611 "r_mbytes_per_sec": 0, 00:15:16.611 "w_mbytes_per_sec": 0 00:15:16.611 }, 00:15:16.611 "claimed": false, 00:15:16.611 "zoned": false, 00:15:16.611 "supported_io_types": { 00:15:16.611 "read": true, 00:15:16.611 "write": true, 00:15:16.611 "unmap": false, 00:15:16.611 "flush": false, 00:15:16.611 "reset": true, 00:15:16.611 "nvme_admin": false, 00:15:16.611 "nvme_io": false, 00:15:16.611 "nvme_io_md": false, 00:15:16.611 "write_zeroes": true, 00:15:16.611 "zcopy": false, 00:15:16.611 "get_zone_info": false, 00:15:16.611 "zone_management": false, 00:15:16.611 "zone_append": false, 00:15:16.611 "compare": false, 00:15:16.611 "compare_and_write": false, 00:15:16.611 "abort": false, 00:15:16.611 "seek_hole": false, 00:15:16.611 "seek_data": false, 00:15:16.611 "copy": false, 00:15:16.611 "nvme_iov_md": false 00:15:16.611 }, 00:15:16.611 "driver_specific": { 00:15:16.611 "raid": { 00:15:16.611 "uuid": "2204c05b-b7f1-4377-b20a-45f107eef58b", 00:15:16.611 "strip_size_kb": 64, 00:15:16.611 "state": "online", 00:15:16.611 "raid_level": "raid5f", 00:15:16.612 "superblock": true, 
00:15:16.612 "num_base_bdevs": 3, 00:15:16.612 "num_base_bdevs_discovered": 3, 00:15:16.612 "num_base_bdevs_operational": 3, 00:15:16.612 "base_bdevs_list": [ 00:15:16.612 { 00:15:16.612 "name": "NewBaseBdev", 00:15:16.612 "uuid": "cf603dde-dee5-4d06-a7eb-2bf0600ffcd2", 00:15:16.612 "is_configured": true, 00:15:16.612 "data_offset": 2048, 00:15:16.612 "data_size": 63488 00:15:16.612 }, 00:15:16.612 { 00:15:16.612 "name": "BaseBdev2", 00:15:16.612 "uuid": "a2f1f8b1-57cf-467a-b4ba-44c64757041b", 00:15:16.612 "is_configured": true, 00:15:16.612 "data_offset": 2048, 00:15:16.612 "data_size": 63488 00:15:16.612 }, 00:15:16.612 { 00:15:16.612 "name": "BaseBdev3", 00:15:16.612 "uuid": "af36eb37-820a-4f8c-bdd3-fc9fad6df6f7", 00:15:16.612 "is_configured": true, 00:15:16.612 "data_offset": 2048, 00:15:16.612 "data_size": 63488 00:15:16.612 } 00:15:16.612 ] 00:15:16.612 } 00:15:16.612 } 00:15:16.612 }' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:16.612 BaseBdev2 00:15:16.612 BaseBdev3' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.612 04:04:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.612 [2024-11-18 04:04:13.222163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.612 [2024-11-18 04:04:13.222191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.612 [2024-11-18 04:04:13.222256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.612 [2024-11-18 04:04:13.222516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.612 [2024-11-18 04:04:13.222536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80419 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80419 ']' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80419 00:15:16.612 04:04:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.612 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80419 00:15:16.872 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.872 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.872 killing process with pid 80419 00:15:16.872 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80419' 00:15:16.872 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80419 00:15:16.872 [2024-11-18 04:04:13.269117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.872 04:04:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80419 00:15:17.132 [2024-11-18 04:04:13.546424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.071 04:04:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:18.071 00:15:18.071 real 0m10.290s 00:15:18.071 user 0m16.445s 00:15:18.071 sys 0m1.820s 00:15:18.071 04:04:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.071 04:04:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.071 ************************************ 00:15:18.071 END TEST raid5f_state_function_test_sb 00:15:18.071 ************************************ 00:15:18.071 04:04:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:18.071 04:04:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:18.071 
04:04:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.071 04:04:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.071 ************************************ 00:15:18.071 START TEST raid5f_superblock_test 00:15:18.071 ************************************ 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:18.071 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81034 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81034 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81034 ']' 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.072 04:04:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.332 [2024-11-18 04:04:14.734031] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:18.332 [2024-11-18 04:04:14.734153] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81034 ] 00:15:18.332 [2024-11-18 04:04:14.906387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.592 [2024-11-18 04:04:15.010925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.592 [2024-11-18 04:04:15.205006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.592 [2024-11-18 04:04:15.205046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.163 malloc1 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.163 [2024-11-18 04:04:15.589227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:19.163 [2024-11-18 04:04:15.589299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.163 [2024-11-18 04:04:15.589318] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:19.163 [2024-11-18 04:04:15.589328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.163 [2024-11-18 04:04:15.591299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.163 [2024-11-18 04:04:15.591330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:19.163 pt1 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.163 malloc2 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.163 [2024-11-18 04:04:15.642560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.163 [2024-11-18 04:04:15.642620] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.163 [2024-11-18 04:04:15.642640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:19.163 [2024-11-18 04:04:15.642648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.163 [2024-11-18 04:04:15.644596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.163 [2024-11-18 04:04:15.644628] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.163 pt2 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:19.163 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.164 malloc3 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.164 [2024-11-18 04:04:15.717421] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:19.164 [2024-11-18 04:04:15.717478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.164 [2024-11-18 04:04:15.717494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:19.164 [2024-11-18 04:04:15.717502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.164 [2024-11-18 04:04:15.719422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.164 [2024-11-18 04:04:15.719453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:19.164 pt3 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.164 [2024-11-18 04:04:15.729451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:19.164 [2024-11-18 04:04:15.731173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.164 [2024-11-18 04:04:15.731259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:19.164 [2024-11-18 04:04:15.731407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:19.164 [2024-11-18 04:04:15.731423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:19.164 [2024-11-18 04:04:15.731648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:19.164 [2024-11-18 04:04:15.737349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:19.164 [2024-11-18 04:04:15.737370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:19.164 [2024-11-18 04:04:15.737543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.164 "name": "raid_bdev1", 00:15:19.164 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:19.164 "strip_size_kb": 64, 00:15:19.164 "state": "online", 00:15:19.164 "raid_level": "raid5f", 00:15:19.164 "superblock": true, 00:15:19.164 "num_base_bdevs": 3, 00:15:19.164 "num_base_bdevs_discovered": 3, 00:15:19.164 "num_base_bdevs_operational": 3, 00:15:19.164 "base_bdevs_list": [ 00:15:19.164 { 00:15:19.164 "name": "pt1", 00:15:19.164 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.164 "is_configured": true, 00:15:19.164 "data_offset": 2048, 00:15:19.164 "data_size": 63488 00:15:19.164 }, 00:15:19.164 { 00:15:19.164 "name": "pt2", 00:15:19.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.164 "is_configured": true, 00:15:19.164 "data_offset": 2048, 00:15:19.164 "data_size": 63488 00:15:19.164 }, 00:15:19.164 { 00:15:19.164 "name": "pt3", 00:15:19.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.164 "is_configured": true, 00:15:19.164 "data_offset": 2048, 00:15:19.164 "data_size": 63488 00:15:19.164 } 00:15:19.164 ] 00:15:19.164 }' 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.164 04:04:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:19.735 04:04:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.735 [2024-11-18 04:04:16.167230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.735 "name": "raid_bdev1", 00:15:19.735 "aliases": [ 00:15:19.735 "9ce88c23-aa08-4406-b09e-0da96f0668c2" 00:15:19.735 ], 00:15:19.735 "product_name": "Raid Volume", 00:15:19.735 "block_size": 512, 00:15:19.735 "num_blocks": 126976, 00:15:19.735 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:19.735 "assigned_rate_limits": { 00:15:19.735 "rw_ios_per_sec": 0, 00:15:19.735 "rw_mbytes_per_sec": 0, 00:15:19.735 "r_mbytes_per_sec": 0, 00:15:19.735 "w_mbytes_per_sec": 0 00:15:19.735 }, 00:15:19.735 "claimed": false, 00:15:19.735 "zoned": false, 00:15:19.735 "supported_io_types": { 00:15:19.735 "read": true, 00:15:19.735 "write": true, 00:15:19.735 "unmap": false, 00:15:19.735 "flush": false, 00:15:19.735 "reset": true, 00:15:19.735 "nvme_admin": false, 00:15:19.735 "nvme_io": false, 00:15:19.735 "nvme_io_md": false, 
00:15:19.735 "write_zeroes": true, 00:15:19.735 "zcopy": false, 00:15:19.735 "get_zone_info": false, 00:15:19.735 "zone_management": false, 00:15:19.735 "zone_append": false, 00:15:19.735 "compare": false, 00:15:19.735 "compare_and_write": false, 00:15:19.735 "abort": false, 00:15:19.735 "seek_hole": false, 00:15:19.735 "seek_data": false, 00:15:19.735 "copy": false, 00:15:19.735 "nvme_iov_md": false 00:15:19.735 }, 00:15:19.735 "driver_specific": { 00:15:19.735 "raid": { 00:15:19.735 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:19.735 "strip_size_kb": 64, 00:15:19.735 "state": "online", 00:15:19.735 "raid_level": "raid5f", 00:15:19.735 "superblock": true, 00:15:19.735 "num_base_bdevs": 3, 00:15:19.735 "num_base_bdevs_discovered": 3, 00:15:19.735 "num_base_bdevs_operational": 3, 00:15:19.735 "base_bdevs_list": [ 00:15:19.735 { 00:15:19.735 "name": "pt1", 00:15:19.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.735 "is_configured": true, 00:15:19.735 "data_offset": 2048, 00:15:19.735 "data_size": 63488 00:15:19.735 }, 00:15:19.735 { 00:15:19.735 "name": "pt2", 00:15:19.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.735 "is_configured": true, 00:15:19.735 "data_offset": 2048, 00:15:19.735 "data_size": 63488 00:15:19.735 }, 00:15:19.735 { 00:15:19.735 "name": "pt3", 00:15:19.735 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.735 "is_configured": true, 00:15:19.735 "data_offset": 2048, 00:15:19.735 "data_size": 63488 00:15:19.735 } 00:15:19.735 ] 00:15:19.735 } 00:15:19.735 } 00:15:19.735 }' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:19.735 pt2 00:15:19.735 pt3' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.735 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.996 
04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 [2024-11-18 04:04:16.450701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9ce88c23-aa08-4406-b09e-0da96f0668c2 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9ce88c23-aa08-4406-b09e-0da96f0668c2 ']' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.996 04:04:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 [2024-11-18 04:04:16.494462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.996 [2024-11-18 04:04:16.494489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.996 [2024-11-18 04:04:16.494551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.996 [2024-11-18 04:04:16.494617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.996 [2024-11-18 04:04:16.494626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:19.996 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.997 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.257 [2024-11-18 04:04:16.638257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:20.257 [2024-11-18 04:04:16.640017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:20.257 [2024-11-18 04:04:16.640069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:20.257 [2024-11-18 04:04:16.640114] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:20.257 [2024-11-18 04:04:16.640147] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:20.257 [2024-11-18 04:04:16.640164] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:20.257 [2024-11-18 04:04:16.640179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.257 [2024-11-18 04:04:16.640188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:20.257 request: 00:15:20.257 { 00:15:20.257 "name": "raid_bdev1", 00:15:20.257 "raid_level": "raid5f", 00:15:20.257 "base_bdevs": [ 00:15:20.257 "malloc1", 00:15:20.257 "malloc2", 00:15:20.257 "malloc3" 00:15:20.257 ], 00:15:20.257 "strip_size_kb": 64, 00:15:20.257 "superblock": false, 00:15:20.257 "method": "bdev_raid_create", 00:15:20.257 "req_id": 1 00:15:20.257 } 00:15:20.257 Got JSON-RPC error response 00:15:20.257 response: 00:15:20.257 { 00:15:20.257 "code": -17, 00:15:20.257 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:20.257 } 00:15:20.257 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:20.257 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:20.257 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.257 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 
04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 [2024-11-18 04:04:16.702100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.258 [2024-11-18 04:04:16.702139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.258 [2024-11-18 04:04:16.702155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:20.258 [2024-11-18 04:04:16.702163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.258 [2024-11-18 04:04:16.704182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.258 [2024-11-18 04:04:16.704214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.258 [2024-11-18 04:04:16.704311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.258 [2024-11-18 04:04:16.704359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.258 pt1 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.258 "name": "raid_bdev1", 00:15:20.258 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:20.258 "strip_size_kb": 64, 00:15:20.258 "state": "configuring", 00:15:20.258 "raid_level": "raid5f", 00:15:20.258 "superblock": true, 00:15:20.258 "num_base_bdevs": 3, 00:15:20.258 "num_base_bdevs_discovered": 1, 00:15:20.258 
"num_base_bdevs_operational": 3, 00:15:20.258 "base_bdevs_list": [ 00:15:20.258 { 00:15:20.258 "name": "pt1", 00:15:20.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.258 "is_configured": true, 00:15:20.258 "data_offset": 2048, 00:15:20.258 "data_size": 63488 00:15:20.258 }, 00:15:20.258 { 00:15:20.258 "name": null, 00:15:20.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.258 "is_configured": false, 00:15:20.258 "data_offset": 2048, 00:15:20.258 "data_size": 63488 00:15:20.258 }, 00:15:20.258 { 00:15:20.258 "name": null, 00:15:20.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.258 "is_configured": false, 00:15:20.258 "data_offset": 2048, 00:15:20.258 "data_size": 63488 00:15:20.258 } 00:15:20.258 ] 00:15:20.258 }' 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.258 04:04:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.518 [2024-11-18 04:04:17.149325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:20.518 [2024-11-18 04:04:17.149371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.518 [2024-11-18 04:04:17.149389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:20.518 [2024-11-18 04:04:17.149397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.518 [2024-11-18 04:04:17.149769] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.518 [2024-11-18 04:04:17.149791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:20.518 [2024-11-18 04:04:17.149875] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:20.518 [2024-11-18 04:04:17.149894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.518 pt2 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.518 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.778 [2024-11-18 04:04:17.161322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.778 "name": "raid_bdev1", 00:15:20.778 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:20.778 "strip_size_kb": 64, 00:15:20.778 "state": "configuring", 00:15:20.778 "raid_level": "raid5f", 00:15:20.778 "superblock": true, 00:15:20.778 "num_base_bdevs": 3, 00:15:20.778 "num_base_bdevs_discovered": 1, 00:15:20.778 "num_base_bdevs_operational": 3, 00:15:20.778 "base_bdevs_list": [ 00:15:20.778 { 00:15:20.778 "name": "pt1", 00:15:20.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.778 "is_configured": true, 00:15:20.778 "data_offset": 2048, 00:15:20.778 "data_size": 63488 00:15:20.778 }, 00:15:20.778 { 00:15:20.778 "name": null, 00:15:20.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.778 "is_configured": false, 00:15:20.778 "data_offset": 0, 00:15:20.778 "data_size": 63488 00:15:20.778 }, 00:15:20.778 { 00:15:20.778 "name": null, 00:15:20.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.778 "is_configured": false, 00:15:20.778 "data_offset": 2048, 00:15:20.778 "data_size": 63488 00:15:20.778 } 00:15:20.778 ] 00:15:20.778 }' 00:15:20.778 04:04:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.778 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.038 [2024-11-18 04:04:17.620525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.038 [2024-11-18 04:04:17.620581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.038 [2024-11-18 04:04:17.620598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:21.038 [2024-11-18 04:04:17.620608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.038 [2024-11-18 04:04:17.621019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.038 [2024-11-18 04:04:17.621039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.038 [2024-11-18 04:04:17.621109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:21.038 [2024-11-18 04:04:17.621131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.038 pt2 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:21.038 04:04:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.038 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.039 [2024-11-18 04:04:17.632491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:21.039 [2024-11-18 04:04:17.632535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.039 [2024-11-18 04:04:17.632547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:21.039 [2024-11-18 04:04:17.632556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.039 [2024-11-18 04:04:17.632924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.039 [2024-11-18 04:04:17.632949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:21.039 [2024-11-18 04:04:17.633011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:21.039 [2024-11-18 04:04:17.633046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:21.039 [2024-11-18 04:04:17.633148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:21.039 [2024-11-18 04:04:17.633159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:21.039 [2024-11-18 04:04:17.633377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:21.039 [2024-11-18 04:04:17.638734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:21.039 [2024-11-18 04:04:17.638757] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:21.039 [2024-11-18 04:04:17.638929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.039 pt3 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.039 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.298 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.298 "name": "raid_bdev1", 00:15:21.298 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:21.298 "strip_size_kb": 64, 00:15:21.298 "state": "online", 00:15:21.298 "raid_level": "raid5f", 00:15:21.298 "superblock": true, 00:15:21.298 "num_base_bdevs": 3, 00:15:21.299 "num_base_bdevs_discovered": 3, 00:15:21.299 "num_base_bdevs_operational": 3, 00:15:21.299 "base_bdevs_list": [ 00:15:21.299 { 00:15:21.299 "name": "pt1", 00:15:21.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.299 "is_configured": true, 00:15:21.299 "data_offset": 2048, 00:15:21.299 "data_size": 63488 00:15:21.299 }, 00:15:21.299 { 00:15:21.299 "name": "pt2", 00:15:21.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.299 "is_configured": true, 00:15:21.299 "data_offset": 2048, 00:15:21.299 "data_size": 63488 00:15:21.299 }, 00:15:21.299 { 00:15:21.299 "name": "pt3", 00:15:21.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.299 "is_configured": true, 00:15:21.299 "data_offset": 2048, 00:15:21.299 "data_size": 63488 00:15:21.299 } 00:15:21.299 ] 00:15:21.299 }' 00:15:21.299 04:04:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.299 04:04:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.559 
04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.559 [2024-11-18 04:04:18.108484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.559 "name": "raid_bdev1", 00:15:21.559 "aliases": [ 00:15:21.559 "9ce88c23-aa08-4406-b09e-0da96f0668c2" 00:15:21.559 ], 00:15:21.559 "product_name": "Raid Volume", 00:15:21.559 "block_size": 512, 00:15:21.559 "num_blocks": 126976, 00:15:21.559 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:21.559 "assigned_rate_limits": { 00:15:21.559 "rw_ios_per_sec": 0, 00:15:21.559 "rw_mbytes_per_sec": 0, 00:15:21.559 "r_mbytes_per_sec": 0, 00:15:21.559 "w_mbytes_per_sec": 0 00:15:21.559 }, 00:15:21.559 "claimed": false, 00:15:21.559 "zoned": false, 00:15:21.559 "supported_io_types": { 00:15:21.559 "read": true, 00:15:21.559 "write": true, 00:15:21.559 "unmap": false, 00:15:21.559 "flush": false, 00:15:21.559 "reset": true, 00:15:21.559 "nvme_admin": false, 00:15:21.559 "nvme_io": false, 00:15:21.559 "nvme_io_md": false, 00:15:21.559 "write_zeroes": true, 00:15:21.559 "zcopy": false, 00:15:21.559 "get_zone_info": false, 
00:15:21.559 "zone_management": false, 00:15:21.559 "zone_append": false, 00:15:21.559 "compare": false, 00:15:21.559 "compare_and_write": false, 00:15:21.559 "abort": false, 00:15:21.559 "seek_hole": false, 00:15:21.559 "seek_data": false, 00:15:21.559 "copy": false, 00:15:21.559 "nvme_iov_md": false 00:15:21.559 }, 00:15:21.559 "driver_specific": { 00:15:21.559 "raid": { 00:15:21.559 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:21.559 "strip_size_kb": 64, 00:15:21.559 "state": "online", 00:15:21.559 "raid_level": "raid5f", 00:15:21.559 "superblock": true, 00:15:21.559 "num_base_bdevs": 3, 00:15:21.559 "num_base_bdevs_discovered": 3, 00:15:21.559 "num_base_bdevs_operational": 3, 00:15:21.559 "base_bdevs_list": [ 00:15:21.559 { 00:15:21.559 "name": "pt1", 00:15:21.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.559 "is_configured": true, 00:15:21.559 "data_offset": 2048, 00:15:21.559 "data_size": 63488 00:15:21.559 }, 00:15:21.559 { 00:15:21.559 "name": "pt2", 00:15:21.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.559 "is_configured": true, 00:15:21.559 "data_offset": 2048, 00:15:21.559 "data_size": 63488 00:15:21.559 }, 00:15:21.559 { 00:15:21.559 "name": "pt3", 00:15:21.559 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.559 "is_configured": true, 00:15:21.559 "data_offset": 2048, 00:15:21.559 "data_size": 63488 00:15:21.559 } 00:15:21.559 ] 00:15:21.559 } 00:15:21.559 } 00:15:21.559 }' 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:21.559 pt2 00:15:21.559 pt3' 00:15:21.559 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:21.820 [2024-11-18 04:04:18.352109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9ce88c23-aa08-4406-b09e-0da96f0668c2 '!=' 9ce88c23-aa08-4406-b09e-0da96f0668c2 ']' 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:21.820 04:04:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.820 [2024-11-18 04:04:18.399913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.820 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.821 "name": "raid_bdev1", 00:15:21.821 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:21.821 "strip_size_kb": 64, 00:15:21.821 "state": "online", 00:15:21.821 "raid_level": "raid5f", 00:15:21.821 "superblock": true, 00:15:21.821 "num_base_bdevs": 3, 00:15:21.821 "num_base_bdevs_discovered": 2, 00:15:21.821 "num_base_bdevs_operational": 2, 00:15:21.821 "base_bdevs_list": [ 00:15:21.821 { 00:15:21.821 "name": null, 00:15:21.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.821 "is_configured": false, 00:15:21.821 "data_offset": 0, 00:15:21.821 "data_size": 63488 00:15:21.821 }, 00:15:21.821 { 00:15:21.821 "name": "pt2", 00:15:21.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.821 "is_configured": true, 00:15:21.821 "data_offset": 2048, 00:15:21.821 "data_size": 63488 00:15:21.821 }, 00:15:21.821 { 00:15:21.821 "name": "pt3", 00:15:21.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.821 "is_configured": true, 00:15:21.821 "data_offset": 2048, 00:15:21.821 "data_size": 63488 00:15:21.821 } 00:15:21.821 ] 00:15:21.821 }' 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.821 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 [2024-11-18 04:04:18.831108] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:22.391 [2024-11-18 04:04:18.831137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.391 [2024-11-18 04:04:18.831206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.391 [2024-11-18 04:04:18.831258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.391 [2024-11-18 04:04:18.831271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 04:04:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 [2024-11-18 04:04:18.914936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.391 [2024-11-18 04:04:18.914979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.391 [2024-11-18 04:04:18.914994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:22.391 [2024-11-18 04:04:18.915004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:22.391 [2024-11-18 04:04:18.917102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.391 [2024-11-18 04:04:18.917137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.391 [2024-11-18 04:04:18.917204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:22.391 [2024-11-18 04:04:18.917254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.391 pt2 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.392 "name": "raid_bdev1", 00:15:22.392 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:22.392 "strip_size_kb": 64, 00:15:22.392 "state": "configuring", 00:15:22.392 "raid_level": "raid5f", 00:15:22.392 "superblock": true, 00:15:22.392 "num_base_bdevs": 3, 00:15:22.392 "num_base_bdevs_discovered": 1, 00:15:22.392 "num_base_bdevs_operational": 2, 00:15:22.392 "base_bdevs_list": [ 00:15:22.392 { 00:15:22.392 "name": null, 00:15:22.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.392 "is_configured": false, 00:15:22.392 "data_offset": 2048, 00:15:22.392 "data_size": 63488 00:15:22.392 }, 00:15:22.392 { 00:15:22.392 "name": "pt2", 00:15:22.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.392 "is_configured": true, 00:15:22.392 "data_offset": 2048, 00:15:22.392 "data_size": 63488 00:15:22.392 }, 00:15:22.392 { 00:15:22.392 "name": null, 00:15:22.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:22.392 "is_configured": false, 00:15:22.392 "data_offset": 2048, 00:15:22.392 "data_size": 63488 00:15:22.392 } 00:15:22.392 ] 00:15:22.392 }' 00:15:22.392 04:04:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.392 04:04:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- 
# i=2 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.964 [2024-11-18 04:04:19.334201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:22.964 [2024-11-18 04:04:19.334265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.964 [2024-11-18 04:04:19.334284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:22.964 [2024-11-18 04:04:19.334294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.964 [2024-11-18 04:04:19.334697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.964 [2024-11-18 04:04:19.334717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:22.964 [2024-11-18 04:04:19.334783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:22.964 [2024-11-18 04:04:19.334813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:22.964 [2024-11-18 04:04:19.334938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:22.964 [2024-11-18 04:04:19.334949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:22.964 [2024-11-18 04:04:19.335175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:22.964 pt3 00:15:22.964 [2024-11-18 04:04:19.340596] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:22.964 [2024-11-18 04:04:19.340620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000008200 00:15:22.964 [2024-11-18 04:04:19.340906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.964 "name": "raid_bdev1", 00:15:22.964 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:22.964 "strip_size_kb": 64, 00:15:22.964 "state": "online", 00:15:22.964 "raid_level": "raid5f", 00:15:22.964 "superblock": true, 00:15:22.964 "num_base_bdevs": 3, 00:15:22.964 "num_base_bdevs_discovered": 2, 00:15:22.964 "num_base_bdevs_operational": 2, 00:15:22.964 "base_bdevs_list": [ 00:15:22.964 { 00:15:22.964 "name": null, 00:15:22.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.964 "is_configured": false, 00:15:22.964 "data_offset": 2048, 00:15:22.964 "data_size": 63488 00:15:22.964 }, 00:15:22.964 { 00:15:22.964 "name": "pt2", 00:15:22.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.964 "is_configured": true, 00:15:22.964 "data_offset": 2048, 00:15:22.964 "data_size": 63488 00:15:22.964 }, 00:15:22.964 { 00:15:22.964 "name": "pt3", 00:15:22.964 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:22.964 "is_configured": true, 00:15:22.964 "data_offset": 2048, 00:15:22.964 "data_size": 63488 00:15:22.964 } 00:15:22.964 ] 00:15:22.964 }' 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.964 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.223 [2024-11-18 04:04:19.798731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.223 [2024-11-18 04:04:19.798761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.223 [2024-11-18 04:04:19.798840] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:23.223 [2024-11-18 04:04:19.798897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.223 [2024-11-18 04:04:19.798907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.223 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:23.482 04:04:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.482 [2024-11-18 04:04:19.866628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:23.482 [2024-11-18 04:04:19.866690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.482 [2024-11-18 04:04:19.866707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:23.482 [2024-11-18 04:04:19.866716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.482 [2024-11-18 04:04:19.868884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.482 [2024-11-18 04:04:19.868916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:23.482 [2024-11-18 04:04:19.868984] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:23.482 [2024-11-18 04:04:19.869025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:23.482 [2024-11-18 04:04:19.869146] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:23.482 [2024-11-18 04:04:19.869162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.482 [2024-11-18 04:04:19.869177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:23.482 [2024-11-18 04:04:19.869238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.482 pt1 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:23.482 04:04:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.482 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.483 "name": "raid_bdev1", 00:15:23.483 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:23.483 "strip_size_kb": 64, 00:15:23.483 "state": "configuring", 00:15:23.483 "raid_level": "raid5f", 00:15:23.483 
"superblock": true, 00:15:23.483 "num_base_bdevs": 3, 00:15:23.483 "num_base_bdevs_discovered": 1, 00:15:23.483 "num_base_bdevs_operational": 2, 00:15:23.483 "base_bdevs_list": [ 00:15:23.483 { 00:15:23.483 "name": null, 00:15:23.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.483 "is_configured": false, 00:15:23.483 "data_offset": 2048, 00:15:23.483 "data_size": 63488 00:15:23.483 }, 00:15:23.483 { 00:15:23.483 "name": "pt2", 00:15:23.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.483 "is_configured": true, 00:15:23.483 "data_offset": 2048, 00:15:23.483 "data_size": 63488 00:15:23.483 }, 00:15:23.483 { 00:15:23.483 "name": null, 00:15:23.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:23.483 "is_configured": false, 00:15:23.483 "data_offset": 2048, 00:15:23.483 "data_size": 63488 00:15:23.483 } 00:15:23.483 ] 00:15:23.483 }' 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.483 04:04:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:23.742 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:23.742 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.742 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.743 [2024-11-18 04:04:20.369747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:23.743 [2024-11-18 04:04:20.369813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.743 [2024-11-18 04:04:20.369831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:23.743 [2024-11-18 04:04:20.369850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.743 [2024-11-18 04:04:20.370271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.743 [2024-11-18 04:04:20.370287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:23.743 [2024-11-18 04:04:20.370356] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:23.743 [2024-11-18 04:04:20.370377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:23.743 [2024-11-18 04:04:20.370490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:23.743 [2024-11-18 04:04:20.370498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.743 [2024-11-18 04:04:20.370727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:23.743 [2024-11-18 04:04:20.376257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:23.743 [2024-11-18 04:04:20.376285] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:23.743 [2024-11-18 04:04:20.376500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.743 pt3 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.743 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.003 "name": "raid_bdev1", 00:15:24.003 "uuid": "9ce88c23-aa08-4406-b09e-0da96f0668c2", 00:15:24.003 "strip_size_kb": 64, 00:15:24.003 "state": "online", 00:15:24.003 "raid_level": 
"raid5f", 00:15:24.003 "superblock": true, 00:15:24.003 "num_base_bdevs": 3, 00:15:24.003 "num_base_bdevs_discovered": 2, 00:15:24.003 "num_base_bdevs_operational": 2, 00:15:24.003 "base_bdevs_list": [ 00:15:24.003 { 00:15:24.003 "name": null, 00:15:24.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.003 "is_configured": false, 00:15:24.003 "data_offset": 2048, 00:15:24.003 "data_size": 63488 00:15:24.003 }, 00:15:24.003 { 00:15:24.003 "name": "pt2", 00:15:24.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.003 "is_configured": true, 00:15:24.003 "data_offset": 2048, 00:15:24.003 "data_size": 63488 00:15:24.003 }, 00:15:24.003 { 00:15:24.003 "name": "pt3", 00:15:24.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.003 "is_configured": true, 00:15:24.003 "data_offset": 2048, 00:15:24.003 "data_size": 63488 00:15:24.003 } 00:15:24.003 ] 00:15:24.003 }' 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.003 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:24.263 [2024-11-18 04:04:20.874454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.263 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9ce88c23-aa08-4406-b09e-0da96f0668c2 '!=' 9ce88c23-aa08-4406-b09e-0da96f0668c2 ']' 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81034 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81034 ']' 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81034 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81034 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.522 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.522 killing process with pid 81034 00:15:24.523 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81034' 00:15:24.523 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81034 00:15:24.523 [2024-11-18 04:04:20.956511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.523 [2024-11-18 04:04:20.956603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:24.523 [2024-11-18 04:04:20.956670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.523 [2024-11-18 04:04:20.956683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:24.523 04:04:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81034 00:15:24.781 [2024-11-18 04:04:21.236840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.749 04:04:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:25.749 00:15:25.749 real 0m7.614s 00:15:25.749 user 0m11.998s 00:15:25.749 sys 0m1.359s 00:15:25.749 04:04:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.749 04:04:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.749 ************************************ 00:15:25.749 END TEST raid5f_superblock_test 00:15:25.749 ************************************ 00:15:25.749 04:04:22 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:25.749 04:04:22 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:25.749 04:04:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:25.749 04:04:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.749 04:04:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.749 ************************************ 00:15:25.749 START TEST raid5f_rebuild_test 00:15:25.749 ************************************ 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.749 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81478 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81478 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81478 ']' 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.750 04:04:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.011 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:26.011 Zero copy mechanism will not be used. 00:15:26.011 [2024-11-18 04:04:22.430973] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:26.011 [2024-11-18 04:04:22.431079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81478 ] 00:15:26.011 [2024-11-18 04:04:22.604059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.275 [2024-11-18 04:04:22.703714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.275 [2024-11-18 04:04:22.886448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.275 [2024-11-18 04:04:22.886497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.843 BaseBdev1_malloc 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.843 
04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.843 [2024-11-18 04:04:23.276074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.843 [2024-11-18 04:04:23.276139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.843 [2024-11-18 04:04:23.276162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.843 [2024-11-18 04:04:23.276173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.843 [2024-11-18 04:04:23.278131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.843 [2024-11-18 04:04:23.278167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.843 BaseBdev1 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.843 BaseBdev2_malloc 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.843 [2024-11-18 04:04:23.325263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:26.843 [2024-11-18 04:04:23.325331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.843 [2024-11-18 04:04:23.325348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.843 [2024-11-18 04:04:23.325360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.843 [2024-11-18 04:04:23.327298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.843 [2024-11-18 04:04:23.327333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.843 BaseBdev2 00:15:26.843 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.844 BaseBdev3_malloc 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.844 [2024-11-18 04:04:23.411039] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:26.844 [2024-11-18 04:04:23.411104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.844 [2024-11-18 04:04:23.411125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:26.844 [2024-11-18 04:04:23.411135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.844 [2024-11-18 04:04:23.413057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.844 [2024-11-18 04:04:23.413096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:26.844 BaseBdev3 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.844 spare_malloc 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.844 spare_delay 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.844 [2024-11-18 04:04:23.475536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.844 [2024-11-18 04:04:23.475600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.844 [2024-11-18 04:04:23.475617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:26.844 [2024-11-18 04:04:23.475626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.844 [2024-11-18 04:04:23.477596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.844 [2024-11-18 04:04:23.477634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.844 spare 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.844 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.103 [2024-11-18 04:04:23.487588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.103 [2024-11-18 04:04:23.489294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.103 [2024-11-18 04:04:23.489369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.103 [2024-11-18 04:04:23.489444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:27.103 [2024-11-18 04:04:23.489454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:27.103 [2024-11-18 
04:04:23.489697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:27.103 [2024-11-18 04:04:23.495159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:27.103 [2024-11-18 04:04:23.495182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:27.103 [2024-11-18 04:04:23.495370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.103 "name": "raid_bdev1", 00:15:27.103 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:27.103 "strip_size_kb": 64, 00:15:27.103 "state": "online", 00:15:27.103 "raid_level": "raid5f", 00:15:27.103 "superblock": false, 00:15:27.103 "num_base_bdevs": 3, 00:15:27.103 "num_base_bdevs_discovered": 3, 00:15:27.103 "num_base_bdevs_operational": 3, 00:15:27.103 "base_bdevs_list": [ 00:15:27.103 { 00:15:27.103 "name": "BaseBdev1", 00:15:27.103 "uuid": "23ee5747-4148-5eb4-8a44-d7512fd66776", 00:15:27.103 "is_configured": true, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 }, 00:15:27.103 { 00:15:27.103 "name": "BaseBdev2", 00:15:27.103 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:27.103 "is_configured": true, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 }, 00:15:27.103 { 00:15:27.103 "name": "BaseBdev3", 00:15:27.103 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:27.103 "is_configured": true, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 } 00:15:27.103 ] 00:15:27.103 }' 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.103 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.362 04:04:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.362 [2024-11-18 04:04:23.936960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.362 04:04:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:27.621 [2024-11-18 04:04:24.172439] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:27.621 /dev/nbd0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.621 1+0 records in 00:15:27.621 1+0 records out 00:15:27.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364452 s, 11.2 MB/s 00:15:27.621 
04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:27.621 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:28.189 512+0 records in 00:15:28.189 512+0 records out 00:15:28.189 67108864 bytes (67 MB, 64 MiB) copied, 0.356558 s, 188 MB/s 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.189 [2024-11-18 04:04:24.811923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.189 [2024-11-18 04:04:24.823345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.189 04:04:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.189 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.190 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.190 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.190 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.450 "name": "raid_bdev1", 00:15:28.450 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:28.450 "strip_size_kb": 64, 00:15:28.450 "state": "online", 00:15:28.450 "raid_level": "raid5f", 00:15:28.450 "superblock": false, 00:15:28.450 "num_base_bdevs": 3, 00:15:28.450 "num_base_bdevs_discovered": 2, 00:15:28.450 "num_base_bdevs_operational": 2, 00:15:28.450 "base_bdevs_list": [ 00:15:28.450 { 00:15:28.450 "name": null, 00:15:28.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.450 "is_configured": false, 00:15:28.450 "data_offset": 0, 00:15:28.450 "data_size": 65536 00:15:28.450 }, 00:15:28.450 { 00:15:28.450 
"name": "BaseBdev2", 00:15:28.450 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:28.450 "is_configured": true, 00:15:28.450 "data_offset": 0, 00:15:28.450 "data_size": 65536 00:15:28.450 }, 00:15:28.450 { 00:15:28.450 "name": "BaseBdev3", 00:15:28.450 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:28.450 "is_configured": true, 00:15:28.450 "data_offset": 0, 00:15:28.450 "data_size": 65536 00:15:28.450 } 00:15:28.450 ] 00:15:28.450 }' 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.450 04:04:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.709 04:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.709 04:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.709 04:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.709 [2024-11-18 04:04:25.294533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.709 [2024-11-18 04:04:25.310555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:28.710 04:04:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.710 04:04:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:28.710 [2024-11-18 04:04:25.317341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.090 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.091 "name": "raid_bdev1", 00:15:30.091 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:30.091 "strip_size_kb": 64, 00:15:30.091 "state": "online", 00:15:30.091 "raid_level": "raid5f", 00:15:30.091 "superblock": false, 00:15:30.091 "num_base_bdevs": 3, 00:15:30.091 "num_base_bdevs_discovered": 3, 00:15:30.091 "num_base_bdevs_operational": 3, 00:15:30.091 "process": { 00:15:30.091 "type": "rebuild", 00:15:30.091 "target": "spare", 00:15:30.091 "progress": { 00:15:30.091 "blocks": 20480, 00:15:30.091 "percent": 15 00:15:30.091 } 00:15:30.091 }, 00:15:30.091 "base_bdevs_list": [ 00:15:30.091 { 00:15:30.091 "name": "spare", 00:15:30.091 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:30.091 "is_configured": true, 00:15:30.091 "data_offset": 0, 00:15:30.091 "data_size": 65536 00:15:30.091 }, 00:15:30.091 { 00:15:30.091 "name": "BaseBdev2", 00:15:30.091 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:30.091 "is_configured": true, 00:15:30.091 "data_offset": 0, 00:15:30.091 "data_size": 65536 00:15:30.091 }, 00:15:30.091 { 00:15:30.091 "name": "BaseBdev3", 00:15:30.091 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:30.091 "is_configured": true, 00:15:30.091 "data_offset": 0, 00:15:30.091 
"data_size": 65536 00:15:30.091 } 00:15:30.091 ] 00:15:30.091 }' 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.091 [2024-11-18 04:04:26.436407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.091 [2024-11-18 04:04:26.524475] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.091 [2024-11-18 04:04:26.524546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.091 [2024-11-18 04:04:26.524562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.091 [2024-11-18 04:04:26.524570] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.091 "name": "raid_bdev1", 00:15:30.091 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:30.091 "strip_size_kb": 64, 00:15:30.091 "state": "online", 00:15:30.091 "raid_level": "raid5f", 00:15:30.091 "superblock": false, 00:15:30.091 "num_base_bdevs": 3, 00:15:30.091 "num_base_bdevs_discovered": 2, 00:15:30.091 "num_base_bdevs_operational": 2, 00:15:30.091 "base_bdevs_list": [ 00:15:30.091 { 00:15:30.091 "name": null, 00:15:30.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.091 "is_configured": false, 00:15:30.091 "data_offset": 0, 00:15:30.091 "data_size": 65536 00:15:30.091 }, 00:15:30.091 { 00:15:30.091 "name": "BaseBdev2", 00:15:30.091 
"uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:30.091 "is_configured": true, 00:15:30.091 "data_offset": 0, 00:15:30.091 "data_size": 65536 00:15:30.091 }, 00:15:30.091 { 00:15:30.091 "name": "BaseBdev3", 00:15:30.091 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:30.091 "is_configured": true, 00:15:30.091 "data_offset": 0, 00:15:30.091 "data_size": 65536 00:15:30.091 } 00:15:30.091 ] 00:15:30.091 }' 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.091 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.351 04:04:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.611 04:04:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.611 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.611 "name": "raid_bdev1", 00:15:30.611 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:30.611 "strip_size_kb": 64, 00:15:30.611 "state": "online", 00:15:30.611 "raid_level": 
"raid5f", 00:15:30.611 "superblock": false, 00:15:30.611 "num_base_bdevs": 3, 00:15:30.611 "num_base_bdevs_discovered": 2, 00:15:30.611 "num_base_bdevs_operational": 2, 00:15:30.611 "base_bdevs_list": [ 00:15:30.611 { 00:15:30.611 "name": null, 00:15:30.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.611 "is_configured": false, 00:15:30.611 "data_offset": 0, 00:15:30.611 "data_size": 65536 00:15:30.611 }, 00:15:30.611 { 00:15:30.611 "name": "BaseBdev2", 00:15:30.611 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:30.611 "is_configured": true, 00:15:30.611 "data_offset": 0, 00:15:30.611 "data_size": 65536 00:15:30.611 }, 00:15:30.611 { 00:15:30.611 "name": "BaseBdev3", 00:15:30.611 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:30.612 "is_configured": true, 00:15:30.612 "data_offset": 0, 00:15:30.612 "data_size": 65536 00:15:30.612 } 00:15:30.612 ] 00:15:30.612 }' 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.612 [2024-11-18 04:04:27.116402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:30.612 [2024-11-18 04:04:27.133218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.612 04:04:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:30.612 [2024-11-18 04:04:27.140550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.555 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.556 "name": "raid_bdev1", 00:15:31.556 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:31.556 "strip_size_kb": 64, 00:15:31.556 "state": "online", 00:15:31.556 "raid_level": "raid5f", 00:15:31.556 "superblock": false, 00:15:31.556 "num_base_bdevs": 3, 00:15:31.556 "num_base_bdevs_discovered": 3, 00:15:31.556 "num_base_bdevs_operational": 3, 00:15:31.556 "process": { 00:15:31.556 "type": "rebuild", 00:15:31.556 "target": "spare", 00:15:31.556 "progress": { 00:15:31.556 "blocks": 20480, 00:15:31.556 
"percent": 15 00:15:31.556 } 00:15:31.556 }, 00:15:31.556 "base_bdevs_list": [ 00:15:31.556 { 00:15:31.556 "name": "spare", 00:15:31.556 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:31.556 "is_configured": true, 00:15:31.556 "data_offset": 0, 00:15:31.556 "data_size": 65536 00:15:31.556 }, 00:15:31.556 { 00:15:31.556 "name": "BaseBdev2", 00:15:31.556 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:31.556 "is_configured": true, 00:15:31.556 "data_offset": 0, 00:15:31.556 "data_size": 65536 00:15:31.556 }, 00:15:31.556 { 00:15:31.556 "name": "BaseBdev3", 00:15:31.556 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:31.556 "is_configured": true, 00:15:31.556 "data_offset": 0, 00:15:31.556 "data_size": 65536 00:15:31.556 } 00:15:31.556 ] 00:15:31.556 }' 00:15:31.556 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=542 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.818 "name": "raid_bdev1", 00:15:31.818 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:31.818 "strip_size_kb": 64, 00:15:31.818 "state": "online", 00:15:31.818 "raid_level": "raid5f", 00:15:31.818 "superblock": false, 00:15:31.818 "num_base_bdevs": 3, 00:15:31.818 "num_base_bdevs_discovered": 3, 00:15:31.818 "num_base_bdevs_operational": 3, 00:15:31.818 "process": { 00:15:31.818 "type": "rebuild", 00:15:31.818 "target": "spare", 00:15:31.818 "progress": { 00:15:31.818 "blocks": 22528, 00:15:31.818 "percent": 17 00:15:31.818 } 00:15:31.818 }, 00:15:31.818 "base_bdevs_list": [ 00:15:31.818 { 00:15:31.818 "name": "spare", 00:15:31.818 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:31.818 "is_configured": true, 00:15:31.818 "data_offset": 0, 00:15:31.818 "data_size": 65536 00:15:31.818 }, 00:15:31.818 { 00:15:31.818 "name": "BaseBdev2", 00:15:31.818 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:31.818 "is_configured": true, 00:15:31.818 "data_offset": 0, 00:15:31.818 
"data_size": 65536 00:15:31.818 }, 00:15:31.818 { 00:15:31.818 "name": "BaseBdev3", 00:15:31.818 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:31.818 "is_configured": true, 00:15:31.818 "data_offset": 0, 00:15:31.818 "data_size": 65536 00:15:31.818 } 00:15:31.818 ] 00:15:31.818 }' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.818 04:04:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.197 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.197 "name": "raid_bdev1", 00:15:33.197 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:33.197 "strip_size_kb": 64, 00:15:33.197 "state": "online", 00:15:33.197 "raid_level": "raid5f", 00:15:33.197 "superblock": false, 00:15:33.197 "num_base_bdevs": 3, 00:15:33.197 "num_base_bdevs_discovered": 3, 00:15:33.197 "num_base_bdevs_operational": 3, 00:15:33.198 "process": { 00:15:33.198 "type": "rebuild", 00:15:33.198 "target": "spare", 00:15:33.198 "progress": { 00:15:33.198 "blocks": 45056, 00:15:33.198 "percent": 34 00:15:33.198 } 00:15:33.198 }, 00:15:33.198 "base_bdevs_list": [ 00:15:33.198 { 00:15:33.198 "name": "spare", 00:15:33.198 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:33.198 "is_configured": true, 00:15:33.198 "data_offset": 0, 00:15:33.198 "data_size": 65536 00:15:33.198 }, 00:15:33.198 { 00:15:33.198 "name": "BaseBdev2", 00:15:33.198 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:33.198 "is_configured": true, 00:15:33.198 "data_offset": 0, 00:15:33.198 "data_size": 65536 00:15:33.198 }, 00:15:33.198 { 00:15:33.198 "name": "BaseBdev3", 00:15:33.198 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:33.198 "is_configured": true, 00:15:33.198 "data_offset": 0, 00:15:33.198 "data_size": 65536 00:15:33.198 } 00:15:33.198 ] 00:15:33.198 }' 00:15:33.198 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.198 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.198 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.198 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.198 04:04:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.138 "name": "raid_bdev1", 00:15:34.138 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:34.138 "strip_size_kb": 64, 00:15:34.138 "state": "online", 00:15:34.138 "raid_level": "raid5f", 00:15:34.138 "superblock": false, 00:15:34.138 "num_base_bdevs": 3, 00:15:34.138 "num_base_bdevs_discovered": 3, 00:15:34.138 "num_base_bdevs_operational": 3, 00:15:34.138 "process": { 00:15:34.138 "type": "rebuild", 00:15:34.138 "target": "spare", 00:15:34.138 "progress": { 00:15:34.138 "blocks": 69632, 00:15:34.138 "percent": 53 00:15:34.138 } 00:15:34.138 }, 00:15:34.138 "base_bdevs_list": [ 00:15:34.138 { 00:15:34.138 "name": "spare", 00:15:34.138 "uuid": 
"6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:34.138 "is_configured": true, 00:15:34.138 "data_offset": 0, 00:15:34.138 "data_size": 65536 00:15:34.138 }, 00:15:34.138 { 00:15:34.138 "name": "BaseBdev2", 00:15:34.138 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:34.138 "is_configured": true, 00:15:34.138 "data_offset": 0, 00:15:34.138 "data_size": 65536 00:15:34.138 }, 00:15:34.138 { 00:15:34.138 "name": "BaseBdev3", 00:15:34.138 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:34.138 "is_configured": true, 00:15:34.138 "data_offset": 0, 00:15:34.138 "data_size": 65536 00:15:34.138 } 00:15:34.138 ] 00:15:34.138 }' 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.138 04:04:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.519 04:04:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.519 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.519 "name": "raid_bdev1", 00:15:35.519 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:35.519 "strip_size_kb": 64, 00:15:35.519 "state": "online", 00:15:35.519 "raid_level": "raid5f", 00:15:35.519 "superblock": false, 00:15:35.519 "num_base_bdevs": 3, 00:15:35.519 "num_base_bdevs_discovered": 3, 00:15:35.519 "num_base_bdevs_operational": 3, 00:15:35.519 "process": { 00:15:35.519 "type": "rebuild", 00:15:35.519 "target": "spare", 00:15:35.519 "progress": { 00:15:35.519 "blocks": 92160, 00:15:35.519 "percent": 70 00:15:35.519 } 00:15:35.519 }, 00:15:35.519 "base_bdevs_list": [ 00:15:35.519 { 00:15:35.519 "name": "spare", 00:15:35.519 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:35.519 "is_configured": true, 00:15:35.519 "data_offset": 0, 00:15:35.519 "data_size": 65536 00:15:35.520 }, 00:15:35.520 { 00:15:35.520 "name": "BaseBdev2", 00:15:35.520 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:35.520 "is_configured": true, 00:15:35.520 "data_offset": 0, 00:15:35.520 "data_size": 65536 00:15:35.520 }, 00:15:35.520 { 00:15:35.520 "name": "BaseBdev3", 00:15:35.520 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:35.520 "is_configured": true, 00:15:35.520 "data_offset": 0, 00:15:35.520 "data_size": 65536 00:15:35.520 } 00:15:35.520 ] 00:15:35.520 }' 00:15:35.520 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.520 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.520 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.520 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.520 04:04:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.459 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.459 "name": "raid_bdev1", 00:15:36.459 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:36.459 "strip_size_kb": 64, 00:15:36.459 "state": "online", 00:15:36.459 "raid_level": "raid5f", 00:15:36.459 "superblock": false, 00:15:36.459 "num_base_bdevs": 3, 00:15:36.459 "num_base_bdevs_discovered": 3, 00:15:36.459 
"num_base_bdevs_operational": 3, 00:15:36.459 "process": { 00:15:36.459 "type": "rebuild", 00:15:36.459 "target": "spare", 00:15:36.459 "progress": { 00:15:36.459 "blocks": 116736, 00:15:36.459 "percent": 89 00:15:36.459 } 00:15:36.459 }, 00:15:36.459 "base_bdevs_list": [ 00:15:36.459 { 00:15:36.459 "name": "spare", 00:15:36.459 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:36.459 "is_configured": true, 00:15:36.459 "data_offset": 0, 00:15:36.459 "data_size": 65536 00:15:36.459 }, 00:15:36.459 { 00:15:36.459 "name": "BaseBdev2", 00:15:36.459 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:36.460 "is_configured": true, 00:15:36.460 "data_offset": 0, 00:15:36.460 "data_size": 65536 00:15:36.460 }, 00:15:36.460 { 00:15:36.460 "name": "BaseBdev3", 00:15:36.460 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:36.460 "is_configured": true, 00:15:36.460 "data_offset": 0, 00:15:36.460 "data_size": 65536 00:15:36.460 } 00:15:36.460 ] 00:15:36.460 }' 00:15:36.460 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.460 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.460 04:04:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.460 04:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.460 04:04:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.034 [2024-11-18 04:04:33.575439] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:37.034 [2024-11-18 04:04:33.575524] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:37.034 [2024-11-18 04:04:33.575565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.616 "name": "raid_bdev1", 00:15:37.616 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:37.616 "strip_size_kb": 64, 00:15:37.616 "state": "online", 00:15:37.616 "raid_level": "raid5f", 00:15:37.616 "superblock": false, 00:15:37.616 "num_base_bdevs": 3, 00:15:37.616 "num_base_bdevs_discovered": 3, 00:15:37.616 "num_base_bdevs_operational": 3, 00:15:37.616 "base_bdevs_list": [ 00:15:37.616 { 00:15:37.616 "name": "spare", 00:15:37.616 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:37.616 "is_configured": true, 00:15:37.616 "data_offset": 0, 00:15:37.616 "data_size": 65536 00:15:37.616 }, 00:15:37.616 { 00:15:37.616 "name": "BaseBdev2", 00:15:37.616 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:37.616 "is_configured": true, 00:15:37.616 
"data_offset": 0, 00:15:37.616 "data_size": 65536 00:15:37.616 }, 00:15:37.616 { 00:15:37.616 "name": "BaseBdev3", 00:15:37.616 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:37.616 "is_configured": true, 00:15:37.616 "data_offset": 0, 00:15:37.616 "data_size": 65536 00:15:37.616 } 00:15:37.616 ] 00:15:37.616 }' 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.616 04:04:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.616 "name": "raid_bdev1", 00:15:37.616 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:37.616 "strip_size_kb": 64, 00:15:37.616 "state": "online", 00:15:37.616 "raid_level": "raid5f", 00:15:37.616 "superblock": false, 00:15:37.616 "num_base_bdevs": 3, 00:15:37.616 "num_base_bdevs_discovered": 3, 00:15:37.616 "num_base_bdevs_operational": 3, 00:15:37.616 "base_bdevs_list": [ 00:15:37.616 { 00:15:37.616 "name": "spare", 00:15:37.616 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:37.616 "is_configured": true, 00:15:37.616 "data_offset": 0, 00:15:37.616 "data_size": 65536 00:15:37.616 }, 00:15:37.616 { 00:15:37.616 "name": "BaseBdev2", 00:15:37.616 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:37.616 "is_configured": true, 00:15:37.616 "data_offset": 0, 00:15:37.616 "data_size": 65536 00:15:37.616 }, 00:15:37.616 { 00:15:37.616 "name": "BaseBdev3", 00:15:37.616 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:37.616 "is_configured": true, 00:15:37.616 "data_offset": 0, 00:15:37.616 "data_size": 65536 00:15:37.616 } 00:15:37.616 ] 00:15:37.616 }' 00:15:37.616 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.876 04:04:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.876 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.876 "name": "raid_bdev1", 00:15:37.876 "uuid": "e205bb24-63be-47cf-bf5f-cf41a75a2c35", 00:15:37.876 "strip_size_kb": 64, 00:15:37.876 "state": "online", 00:15:37.876 "raid_level": "raid5f", 00:15:37.876 "superblock": false, 00:15:37.876 "num_base_bdevs": 3, 00:15:37.876 "num_base_bdevs_discovered": 3, 00:15:37.876 "num_base_bdevs_operational": 3, 00:15:37.876 "base_bdevs_list": [ 00:15:37.876 { 00:15:37.876 "name": "spare", 00:15:37.876 "uuid": "6f9ccfe1-79b0-5b1e-96ec-a56762a8964d", 00:15:37.876 "is_configured": true, 00:15:37.876 "data_offset": 0, 00:15:37.876 "data_size": 65536 00:15:37.876 }, 00:15:37.876 { 00:15:37.876 
"name": "BaseBdev2", 00:15:37.877 "uuid": "3be3d3a7-99bc-582f-914a-c180ddeb9fff", 00:15:37.877 "is_configured": true, 00:15:37.877 "data_offset": 0, 00:15:37.877 "data_size": 65536 00:15:37.877 }, 00:15:37.877 { 00:15:37.877 "name": "BaseBdev3", 00:15:37.877 "uuid": "6362d5bb-0ecf-5ef0-b70e-425064bd56e3", 00:15:37.877 "is_configured": true, 00:15:37.877 "data_offset": 0, 00:15:37.877 "data_size": 65536 00:15:37.877 } 00:15:37.877 ] 00:15:37.877 }' 00:15:37.877 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.877 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 [2024-11-18 04:04:34.806919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:38.474 [2024-11-18 04:04:34.806948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.474 [2024-11-18 04:04:34.807028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:38.474 [2024-11-18 04:04:34.807110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:38.474 [2024-11-18 04:04:34.807129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.474 04:04:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:38.474 /dev/nbd0 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.474 1+0 records in 00:15:38.474 1+0 records out 00:15:38.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321283 s, 12.7 MB/s 00:15:38.474 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:38.734 /dev/nbd1 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.734 1+0 records in 00:15:38.734 1+0 records out 00:15:38.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433658 s, 9.4 MB/s 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.734 04:04:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:38.734 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.994 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.254 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81478 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81478 ']' 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81478 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81478 00:15:39.514 killing process with pid 81478 00:15:39.514 Received shutdown signal, test time was about 60.000000 seconds 00:15:39.514 00:15:39.514 Latency(us) 00:15:39.514 
[2024-11-18T04:04:36.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.514 [2024-11-18T04:04:36.155Z] =================================================================================================================== 00:15:39.514 [2024-11-18T04:04:36.155Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81478' 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81478 00:15:39.514 [2024-11-18 04:04:35.987466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.514 04:04:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81478 00:15:39.775 [2024-11-18 04:04:36.364481] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:41.158 00:15:41.158 real 0m15.042s 00:15:41.158 user 0m18.534s 00:15:41.158 sys 0m1.912s 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.158 ************************************ 00:15:41.158 END TEST raid5f_rebuild_test 00:15:41.158 ************************************ 00:15:41.158 04:04:37 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:41.158 04:04:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:41.158 04:04:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.158 04:04:37 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.158 ************************************ 00:15:41.158 START TEST raid5f_rebuild_test_sb 00:15:41.158 ************************************ 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81917 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81917 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81917 ']' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.158 04:04:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.158 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:41.158 Zero copy mechanism will not be used. 00:15:41.158 [2024-11-18 04:04:37.548999] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:15:41.158 [2024-11-18 04:04:37.549122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81917 ] 00:15:41.158 [2024-11-18 04:04:37.719116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.418 [2024-11-18 04:04:37.822304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.418 [2024-11-18 04:04:38.009415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.418 [2024-11-18 04:04:38.009454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 BaseBdev1_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 [2024-11-18 04:04:38.399629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.989 [2024-11-18 04:04:38.399711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.989 [2024-11-18 04:04:38.399733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:41.989 [2024-11-18 04:04:38.399743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.989 [2024-11-18 04:04:38.401752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.989 [2024-11-18 04:04:38.401790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.989 BaseBdev1 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:41.989 04:04:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 BaseBdev2_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 [2024-11-18 04:04:38.455130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:41.989 [2024-11-18 04:04:38.455184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.989 [2024-11-18 04:04:38.455202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:41.989 [2024-11-18 04:04:38.455214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.989 [2024-11-18 04:04:38.457258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.989 [2024-11-18 04:04:38.457293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:41.989 BaseBdev2 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:41.989 BaseBdev3_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 [2024-11-18 04:04:38.536955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:41.989 [2024-11-18 04:04:38.537007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.989 [2024-11-18 04:04:38.537028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:41.989 [2024-11-18 04:04:38.537038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.989 [2024-11-18 04:04:38.538974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.989 [2024-11-18 04:04:38.539007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:41.989 BaseBdev3 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 spare_malloc 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.989 spare_delay 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.989 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.990 [2024-11-18 04:04:38.595319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:41.990 [2024-11-18 04:04:38.595366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.990 [2024-11-18 04:04:38.595382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:41.990 [2024-11-18 04:04:38.595393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.990 [2024-11-18 04:04:38.600765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.990 [2024-11-18 04:04:38.600805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:41.990 spare 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.990 [2024-11-18 04:04:38.604992] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.990 [2024-11-18 04:04:38.606656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.990 [2024-11-18 04:04:38.606718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.990 [2024-11-18 04:04:38.606888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:41.990 [2024-11-18 04:04:38.606902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:41.990 [2024-11-18 04:04:38.607133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:41.990 [2024-11-18 04:04:38.612365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:41.990 [2024-11-18 04:04:38.612390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:41.990 [2024-11-18 04:04:38.612564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.990 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.250 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.250 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.250 "name": "raid_bdev1", 00:15:42.250 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:42.250 "strip_size_kb": 64, 00:15:42.250 "state": "online", 00:15:42.250 "raid_level": "raid5f", 00:15:42.250 "superblock": true, 00:15:42.250 "num_base_bdevs": 3, 00:15:42.250 "num_base_bdevs_discovered": 3, 00:15:42.250 "num_base_bdevs_operational": 3, 00:15:42.250 "base_bdevs_list": [ 00:15:42.250 { 00:15:42.250 "name": "BaseBdev1", 00:15:42.250 "uuid": "4af528fb-5d8a-5174-b84a-a74623975294", 00:15:42.250 "is_configured": true, 00:15:42.250 "data_offset": 2048, 00:15:42.250 "data_size": 63488 00:15:42.250 }, 00:15:42.250 { 00:15:42.250 "name": "BaseBdev2", 00:15:42.250 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:42.250 "is_configured": true, 00:15:42.250 "data_offset": 2048, 00:15:42.250 "data_size": 63488 00:15:42.250 }, 00:15:42.250 { 00:15:42.250 "name": "BaseBdev3", 00:15:42.250 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:42.250 "is_configured": true, 
00:15:42.250 "data_offset": 2048, 00:15:42.250 "data_size": 63488 00:15:42.250 } 00:15:42.250 ] 00:15:42.250 }' 00:15:42.250 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.250 04:04:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 [2024-11-18 04:04:39.038373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:42.510 04:04:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.510 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:42.769 [2024-11-18 04:04:39.305790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:42.769 /dev/nbd0 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:42.769 1+0 records in 00:15:42.769 1+0 records out 00:15:42.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241186 s, 17.0 MB/s 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:42.769 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:43.338 496+0 records in 00:15:43.338 496+0 records out 00:15:43.338 65011712 bytes (65 MB, 62 MiB) copied, 0.35087 s, 185 MB/s 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:43.338 [2024-11-18 04:04:39.932485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.338 [2024-11-18 04:04:39.948116] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.338 04:04:39 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.338 04:04:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.597 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.597 "name": "raid_bdev1", 00:15:43.597 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:43.597 "strip_size_kb": 64, 00:15:43.597 "state": "online", 00:15:43.597 "raid_level": "raid5f", 00:15:43.598 "superblock": true, 00:15:43.598 "num_base_bdevs": 3, 00:15:43.598 "num_base_bdevs_discovered": 2, 00:15:43.598 "num_base_bdevs_operational": 2, 00:15:43.598 "base_bdevs_list": [ 00:15:43.598 { 00:15:43.598 "name": null, 00:15:43.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.598 "is_configured": false, 00:15:43.598 "data_offset": 0, 00:15:43.598 "data_size": 63488 00:15:43.598 }, 00:15:43.598 { 00:15:43.598 "name": "BaseBdev2", 00:15:43.598 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:43.598 "is_configured": true, 00:15:43.598 "data_offset": 2048, 00:15:43.598 "data_size": 63488 00:15:43.598 }, 00:15:43.598 { 00:15:43.598 "name": "BaseBdev3", 00:15:43.598 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:43.598 "is_configured": true, 00:15:43.598 "data_offset": 2048, 00:15:43.598 "data_size": 63488 00:15:43.598 } 00:15:43.598 ] 00:15:43.598 }' 00:15:43.598 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.598 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.857 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.857 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.857 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.857 [2024-11-18 04:04:40.403420] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.857 [2024-11-18 04:04:40.419429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:43.857 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.857 04:04:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:43.857 [2024-11-18 04:04:40.426596] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.796 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.056 "name": "raid_bdev1", 00:15:45.056 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:45.056 "strip_size_kb": 64, 00:15:45.056 "state": "online", 00:15:45.056 "raid_level": "raid5f", 00:15:45.056 
"superblock": true, 00:15:45.056 "num_base_bdevs": 3, 00:15:45.056 "num_base_bdevs_discovered": 3, 00:15:45.056 "num_base_bdevs_operational": 3, 00:15:45.056 "process": { 00:15:45.056 "type": "rebuild", 00:15:45.056 "target": "spare", 00:15:45.056 "progress": { 00:15:45.056 "blocks": 20480, 00:15:45.056 "percent": 16 00:15:45.056 } 00:15:45.056 }, 00:15:45.056 "base_bdevs_list": [ 00:15:45.056 { 00:15:45.056 "name": "spare", 00:15:45.056 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:45.056 "is_configured": true, 00:15:45.056 "data_offset": 2048, 00:15:45.056 "data_size": 63488 00:15:45.056 }, 00:15:45.056 { 00:15:45.056 "name": "BaseBdev2", 00:15:45.056 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:45.056 "is_configured": true, 00:15:45.056 "data_offset": 2048, 00:15:45.056 "data_size": 63488 00:15:45.056 }, 00:15:45.056 { 00:15:45.056 "name": "BaseBdev3", 00:15:45.056 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:45.056 "is_configured": true, 00:15:45.056 "data_offset": 2048, 00:15:45.056 "data_size": 63488 00:15:45.056 } 00:15:45.056 ] 00:15:45.056 }' 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.056 [2024-11-18 04:04:41.581752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:45.056 [2024-11-18 04:04:41.633915] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:45.056 [2024-11-18 04:04:41.633968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.056 [2024-11-18 04:04:41.634001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:45.056 [2024-11-18 04:04:41.634008] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.056 
04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.056 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.316 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.317 "name": "raid_bdev1", 00:15:45.317 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:45.317 "strip_size_kb": 64, 00:15:45.317 "state": "online", 00:15:45.317 "raid_level": "raid5f", 00:15:45.317 "superblock": true, 00:15:45.317 "num_base_bdevs": 3, 00:15:45.317 "num_base_bdevs_discovered": 2, 00:15:45.317 "num_base_bdevs_operational": 2, 00:15:45.317 "base_bdevs_list": [ 00:15:45.317 { 00:15:45.317 "name": null, 00:15:45.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.317 "is_configured": false, 00:15:45.317 "data_offset": 0, 00:15:45.317 "data_size": 63488 00:15:45.317 }, 00:15:45.317 { 00:15:45.317 "name": "BaseBdev2", 00:15:45.317 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:45.317 "is_configured": true, 00:15:45.317 "data_offset": 2048, 00:15:45.317 "data_size": 63488 00:15:45.317 }, 00:15:45.317 { 00:15:45.317 "name": "BaseBdev3", 00:15:45.317 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:45.317 "is_configured": true, 00:15:45.317 "data_offset": 2048, 00:15:45.317 "data_size": 63488 00:15:45.317 } 00:15:45.317 ] 00:15:45.317 }' 00:15:45.317 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.317 04:04:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.576 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.576 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.577 04:04:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.577 "name": "raid_bdev1", 00:15:45.577 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:45.577 "strip_size_kb": 64, 00:15:45.577 "state": "online", 00:15:45.577 "raid_level": "raid5f", 00:15:45.577 "superblock": true, 00:15:45.577 "num_base_bdevs": 3, 00:15:45.577 "num_base_bdevs_discovered": 2, 00:15:45.577 "num_base_bdevs_operational": 2, 00:15:45.577 "base_bdevs_list": [ 00:15:45.577 { 00:15:45.577 "name": null, 00:15:45.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.577 "is_configured": false, 00:15:45.577 "data_offset": 0, 00:15:45.577 "data_size": 63488 00:15:45.577 }, 00:15:45.577 { 00:15:45.577 "name": "BaseBdev2", 00:15:45.577 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:45.577 "is_configured": true, 00:15:45.577 "data_offset": 2048, 00:15:45.577 "data_size": 63488 00:15:45.577 }, 00:15:45.577 { 00:15:45.577 "name": "BaseBdev3", 00:15:45.577 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:45.577 "is_configured": true, 00:15:45.577 "data_offset": 2048, 00:15:45.577 
"data_size": 63488 00:15:45.577 } 00:15:45.577 ] 00:15:45.577 }' 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.577 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.837 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.837 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:45.837 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.837 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.837 [2024-11-18 04:04:42.246591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.837 [2024-11-18 04:04:42.261722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:45.837 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.837 04:04:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:45.837 [2024-11-18 04:04:42.268862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.778 "name": "raid_bdev1", 00:15:46.778 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:46.778 "strip_size_kb": 64, 00:15:46.778 "state": "online", 00:15:46.778 "raid_level": "raid5f", 00:15:46.778 "superblock": true, 00:15:46.778 "num_base_bdevs": 3, 00:15:46.778 "num_base_bdevs_discovered": 3, 00:15:46.778 "num_base_bdevs_operational": 3, 00:15:46.778 "process": { 00:15:46.778 "type": "rebuild", 00:15:46.778 "target": "spare", 00:15:46.778 "progress": { 00:15:46.778 "blocks": 20480, 00:15:46.778 "percent": 16 00:15:46.778 } 00:15:46.778 }, 00:15:46.778 "base_bdevs_list": [ 00:15:46.778 { 00:15:46.778 "name": "spare", 00:15:46.778 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:46.778 "is_configured": true, 00:15:46.778 "data_offset": 2048, 00:15:46.778 "data_size": 63488 00:15:46.778 }, 00:15:46.778 { 00:15:46.778 "name": "BaseBdev2", 00:15:46.778 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:46.778 "is_configured": true, 00:15:46.778 "data_offset": 2048, 00:15:46.778 "data_size": 63488 00:15:46.778 }, 00:15:46.778 { 00:15:46.778 "name": "BaseBdev3", 00:15:46.778 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:46.778 "is_configured": true, 00:15:46.778 "data_offset": 2048, 00:15:46.778 "data_size": 63488 00:15:46.778 } 00:15:46.778 ] 00:15:46.778 }' 
00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.778 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:47.039 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=557 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.039 "name": "raid_bdev1", 00:15:47.039 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:47.039 "strip_size_kb": 64, 00:15:47.039 "state": "online", 00:15:47.039 "raid_level": "raid5f", 00:15:47.039 "superblock": true, 00:15:47.039 "num_base_bdevs": 3, 00:15:47.039 "num_base_bdevs_discovered": 3, 00:15:47.039 "num_base_bdevs_operational": 3, 00:15:47.039 "process": { 00:15:47.039 "type": "rebuild", 00:15:47.039 "target": "spare", 00:15:47.039 "progress": { 00:15:47.039 "blocks": 22528, 00:15:47.039 "percent": 17 00:15:47.039 } 00:15:47.039 }, 00:15:47.039 "base_bdevs_list": [ 00:15:47.039 { 00:15:47.039 "name": "spare", 00:15:47.039 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:47.039 "is_configured": true, 00:15:47.039 "data_offset": 2048, 00:15:47.039 "data_size": 63488 00:15:47.039 }, 00:15:47.039 { 00:15:47.039 "name": "BaseBdev2", 00:15:47.039 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:47.039 "is_configured": true, 00:15:47.039 "data_offset": 2048, 00:15:47.039 "data_size": 63488 00:15:47.039 }, 00:15:47.039 { 00:15:47.039 "name": "BaseBdev3", 00:15:47.039 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:47.039 "is_configured": true, 00:15:47.039 "data_offset": 2048, 00:15:47.039 "data_size": 63488 00:15:47.039 } 00:15:47.039 ] 00:15:47.039 }' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.039 04:04:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.980 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.240 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.240 "name": "raid_bdev1", 00:15:48.240 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:48.240 "strip_size_kb": 64, 00:15:48.240 "state": "online", 00:15:48.240 "raid_level": "raid5f", 00:15:48.240 "superblock": true, 00:15:48.240 "num_base_bdevs": 3, 00:15:48.240 "num_base_bdevs_discovered": 3, 00:15:48.240 
"num_base_bdevs_operational": 3, 00:15:48.240 "process": { 00:15:48.240 "type": "rebuild", 00:15:48.240 "target": "spare", 00:15:48.240 "progress": { 00:15:48.240 "blocks": 47104, 00:15:48.240 "percent": 37 00:15:48.240 } 00:15:48.240 }, 00:15:48.240 "base_bdevs_list": [ 00:15:48.240 { 00:15:48.240 "name": "spare", 00:15:48.240 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:48.240 "is_configured": true, 00:15:48.240 "data_offset": 2048, 00:15:48.240 "data_size": 63488 00:15:48.240 }, 00:15:48.240 { 00:15:48.240 "name": "BaseBdev2", 00:15:48.240 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:48.240 "is_configured": true, 00:15:48.240 "data_offset": 2048, 00:15:48.240 "data_size": 63488 00:15:48.240 }, 00:15:48.240 { 00:15:48.240 "name": "BaseBdev3", 00:15:48.240 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:48.240 "is_configured": true, 00:15:48.240 "data_offset": 2048, 00:15:48.240 "data_size": 63488 00:15:48.240 } 00:15:48.240 ] 00:15:48.240 }' 00:15:48.240 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.240 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.240 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.240 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.240 04:04:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.186 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.187 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.187 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.187 "name": "raid_bdev1", 00:15:49.187 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:49.187 "strip_size_kb": 64, 00:15:49.187 "state": "online", 00:15:49.187 "raid_level": "raid5f", 00:15:49.187 "superblock": true, 00:15:49.187 "num_base_bdevs": 3, 00:15:49.187 "num_base_bdevs_discovered": 3, 00:15:49.187 "num_base_bdevs_operational": 3, 00:15:49.187 "process": { 00:15:49.187 "type": "rebuild", 00:15:49.187 "target": "spare", 00:15:49.187 "progress": { 00:15:49.187 "blocks": 69632, 00:15:49.187 "percent": 54 00:15:49.187 } 00:15:49.187 }, 00:15:49.187 "base_bdevs_list": [ 00:15:49.187 { 00:15:49.187 "name": "spare", 00:15:49.187 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:49.187 "is_configured": true, 00:15:49.187 "data_offset": 2048, 00:15:49.187 "data_size": 63488 00:15:49.187 }, 00:15:49.187 { 00:15:49.187 "name": "BaseBdev2", 00:15:49.187 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:49.187 "is_configured": true, 00:15:49.187 "data_offset": 2048, 00:15:49.187 "data_size": 63488 00:15:49.187 }, 00:15:49.187 { 00:15:49.187 "name": "BaseBdev3", 
00:15:49.187 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:49.187 "is_configured": true, 00:15:49.187 "data_offset": 2048, 00:15:49.187 "data_size": 63488 00:15:49.187 } 00:15:49.187 ] 00:15:49.187 }' 00:15:49.187 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.187 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.447 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.447 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.447 04:04:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.387 "name": "raid_bdev1", 00:15:50.387 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:50.387 "strip_size_kb": 64, 00:15:50.387 "state": "online", 00:15:50.387 "raid_level": "raid5f", 00:15:50.387 "superblock": true, 00:15:50.387 "num_base_bdevs": 3, 00:15:50.387 "num_base_bdevs_discovered": 3, 00:15:50.387 "num_base_bdevs_operational": 3, 00:15:50.387 "process": { 00:15:50.387 "type": "rebuild", 00:15:50.387 "target": "spare", 00:15:50.387 "progress": { 00:15:50.387 "blocks": 92160, 00:15:50.387 "percent": 72 00:15:50.387 } 00:15:50.387 }, 00:15:50.387 "base_bdevs_list": [ 00:15:50.387 { 00:15:50.387 "name": "spare", 00:15:50.387 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:50.387 "is_configured": true, 00:15:50.387 "data_offset": 2048, 00:15:50.387 "data_size": 63488 00:15:50.387 }, 00:15:50.387 { 00:15:50.387 "name": "BaseBdev2", 00:15:50.387 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:50.387 "is_configured": true, 00:15:50.387 "data_offset": 2048, 00:15:50.387 "data_size": 63488 00:15:50.387 }, 00:15:50.387 { 00:15:50.387 "name": "BaseBdev3", 00:15:50.387 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:50.387 "is_configured": true, 00:15:50.387 "data_offset": 2048, 00:15:50.387 "data_size": 63488 00:15:50.387 } 00:15:50.387 ] 00:15:50.387 }' 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.387 04:04:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.647 04:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.647 04:04:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.584 04:04:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.584 "name": "raid_bdev1", 00:15:51.584 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:51.584 "strip_size_kb": 64, 00:15:51.584 "state": "online", 00:15:51.584 "raid_level": "raid5f", 00:15:51.584 "superblock": true, 00:15:51.584 "num_base_bdevs": 3, 00:15:51.584 "num_base_bdevs_discovered": 3, 00:15:51.584 "num_base_bdevs_operational": 3, 00:15:51.584 "process": { 00:15:51.584 "type": "rebuild", 00:15:51.584 "target": "spare", 00:15:51.584 "progress": { 00:15:51.584 "blocks": 116736, 00:15:51.584 "percent": 91 00:15:51.584 } 00:15:51.584 }, 00:15:51.584 "base_bdevs_list": [ 00:15:51.584 { 00:15:51.584 "name": "spare", 00:15:51.584 "uuid": 
"514cfe77-6577-5507-8904-a9291310b220", 00:15:51.584 "is_configured": true, 00:15:51.584 "data_offset": 2048, 00:15:51.584 "data_size": 63488 00:15:51.584 }, 00:15:51.584 { 00:15:51.584 "name": "BaseBdev2", 00:15:51.584 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:51.584 "is_configured": true, 00:15:51.584 "data_offset": 2048, 00:15:51.584 "data_size": 63488 00:15:51.584 }, 00:15:51.584 { 00:15:51.584 "name": "BaseBdev3", 00:15:51.584 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:51.584 "is_configured": true, 00:15:51.584 "data_offset": 2048, 00:15:51.584 "data_size": 63488 00:15:51.584 } 00:15:51.584 ] 00:15:51.584 }' 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.584 04:04:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.154 [2024-11-18 04:04:48.503460] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:52.154 [2024-11-18 04:04:48.503556] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:52.154 [2024-11-18 04:04:48.503653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.725 "name": "raid_bdev1", 00:15:52.725 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:52.725 "strip_size_kb": 64, 00:15:52.725 "state": "online", 00:15:52.725 "raid_level": "raid5f", 00:15:52.725 "superblock": true, 00:15:52.725 "num_base_bdevs": 3, 00:15:52.725 "num_base_bdevs_discovered": 3, 00:15:52.725 "num_base_bdevs_operational": 3, 00:15:52.725 "base_bdevs_list": [ 00:15:52.725 { 00:15:52.725 "name": "spare", 00:15:52.725 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:52.725 "is_configured": true, 00:15:52.725 "data_offset": 2048, 00:15:52.725 "data_size": 63488 00:15:52.725 }, 00:15:52.725 { 00:15:52.725 "name": "BaseBdev2", 00:15:52.725 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:52.725 "is_configured": true, 00:15:52.725 "data_offset": 2048, 00:15:52.725 "data_size": 63488 00:15:52.725 }, 00:15:52.725 { 00:15:52.725 "name": "BaseBdev3", 00:15:52.725 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:52.725 "is_configured": true, 00:15:52.725 "data_offset": 2048, 00:15:52.725 "data_size": 63488 00:15:52.725 } 
00:15:52.725 ] 00:15:52.725 }' 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.725 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.985 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.985 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.985 "name": "raid_bdev1", 00:15:52.985 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:52.985 "strip_size_kb": 64, 00:15:52.985 "state": "online", 00:15:52.985 "raid_level": 
"raid5f", 00:15:52.985 "superblock": true, 00:15:52.985 "num_base_bdevs": 3, 00:15:52.985 "num_base_bdevs_discovered": 3, 00:15:52.985 "num_base_bdevs_operational": 3, 00:15:52.985 "base_bdevs_list": [ 00:15:52.985 { 00:15:52.985 "name": "spare", 00:15:52.985 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:52.985 "is_configured": true, 00:15:52.985 "data_offset": 2048, 00:15:52.985 "data_size": 63488 00:15:52.985 }, 00:15:52.985 { 00:15:52.985 "name": "BaseBdev2", 00:15:52.985 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:52.985 "is_configured": true, 00:15:52.985 "data_offset": 2048, 00:15:52.985 "data_size": 63488 00:15:52.985 }, 00:15:52.985 { 00:15:52.985 "name": "BaseBdev3", 00:15:52.985 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:52.985 "is_configured": true, 00:15:52.985 "data_offset": 2048, 00:15:52.985 "data_size": 63488 00:15:52.985 } 00:15:52.985 ] 00:15:52.985 }' 00:15:52.985 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.986 04:04:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.986 "name": "raid_bdev1", 00:15:52.986 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:52.986 "strip_size_kb": 64, 00:15:52.986 "state": "online", 00:15:52.986 "raid_level": "raid5f", 00:15:52.986 "superblock": true, 00:15:52.986 "num_base_bdevs": 3, 00:15:52.986 "num_base_bdevs_discovered": 3, 00:15:52.986 "num_base_bdevs_operational": 3, 00:15:52.986 "base_bdevs_list": [ 00:15:52.986 { 00:15:52.986 "name": "spare", 00:15:52.986 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:52.986 "is_configured": true, 00:15:52.986 "data_offset": 2048, 00:15:52.986 "data_size": 63488 00:15:52.986 }, 00:15:52.986 { 00:15:52.986 "name": "BaseBdev2", 00:15:52.986 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:52.986 "is_configured": true, 00:15:52.986 "data_offset": 2048, 00:15:52.986 
"data_size": 63488 00:15:52.986 }, 00:15:52.986 { 00:15:52.986 "name": "BaseBdev3", 00:15:52.986 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:52.986 "is_configured": true, 00:15:52.986 "data_offset": 2048, 00:15:52.986 "data_size": 63488 00:15:52.986 } 00:15:52.986 ] 00:15:52.986 }' 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.986 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.554 [2024-11-18 04:04:49.905676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.554 [2024-11-18 04:04:49.905744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.554 [2024-11-18 04:04:49.905881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.554 [2024-11-18 04:04:49.905980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.554 [2024-11-18 04:04:49.906056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@720 -- # jq length 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.554 04:04:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:53.554 /dev/nbd0 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:53.554 
04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.554 1+0 records in 00:15:53.554 1+0 records out 00:15:53.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517912 s, 7.9 MB/s 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:53.554 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:53.813 /dev/nbd1 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.813 1+0 records in 00:15:53.813 1+0 records out 00:15:53.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392804 s, 10.4 MB/s 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.813 04:04:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:53.813 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.072 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:54.330 04:04:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.330 04:04:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:54.589 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:54.589 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.590 04:04:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.590 [2024-11-18 04:04:51.021661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.590 [2024-11-18 04:04:51.021762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.590 [2024-11-18 04:04:51.021820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:54.590 [2024-11-18 04:04:51.021876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.590 [2024-11-18 04:04:51.024156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.590 [2024-11-18 04:04:51.024232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.590 [2024-11-18 04:04:51.024367] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:54.590 [2024-11-18 04:04:51.024462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.590 [2024-11-18 04:04:51.024669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.590 [2024-11-18 04:04:51.024842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.590 spare 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.590 [2024-11-18 04:04:51.124780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:54.590 [2024-11-18 04:04:51.124871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:15:54.590 [2024-11-18 04:04:51.125162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:54.590 [2024-11-18 04:04:51.130384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:54.590 [2024-11-18 04:04:51.130435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:54.590 [2024-11-18 04:04:51.130655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.590 "name": "raid_bdev1", 00:15:54.590 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:54.590 "strip_size_kb": 64, 00:15:54.590 "state": "online", 00:15:54.590 "raid_level": "raid5f", 00:15:54.590 "superblock": true, 00:15:54.590 "num_base_bdevs": 3, 00:15:54.590 "num_base_bdevs_discovered": 3, 00:15:54.590 "num_base_bdevs_operational": 3, 00:15:54.590 "base_bdevs_list": [ 00:15:54.590 { 00:15:54.590 "name": "spare", 00:15:54.590 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:54.590 "is_configured": true, 00:15:54.590 "data_offset": 2048, 00:15:54.590 "data_size": 63488 00:15:54.590 }, 00:15:54.590 { 00:15:54.590 "name": "BaseBdev2", 00:15:54.590 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:54.590 "is_configured": true, 00:15:54.590 "data_offset": 2048, 00:15:54.590 "data_size": 63488 00:15:54.590 }, 00:15:54.590 { 00:15:54.590 "name": "BaseBdev3", 00:15:54.590 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:54.590 "is_configured": true, 00:15:54.590 "data_offset": 2048, 00:15:54.590 "data_size": 63488 00:15:54.590 } 00:15:54.590 ] 00:15:54.590 }' 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.590 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.158 
04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.158 "name": "raid_bdev1", 00:15:55.158 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:55.158 "strip_size_kb": 64, 00:15:55.158 "state": "online", 00:15:55.158 "raid_level": "raid5f", 00:15:55.158 "superblock": true, 00:15:55.158 "num_base_bdevs": 3, 00:15:55.158 "num_base_bdevs_discovered": 3, 00:15:55.158 "num_base_bdevs_operational": 3, 00:15:55.158 "base_bdevs_list": [ 00:15:55.158 { 00:15:55.158 "name": "spare", 00:15:55.158 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:55.158 "is_configured": true, 00:15:55.158 "data_offset": 2048, 00:15:55.158 "data_size": 63488 00:15:55.158 }, 00:15:55.158 { 00:15:55.158 "name": "BaseBdev2", 00:15:55.158 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:55.158 "is_configured": true, 00:15:55.158 "data_offset": 2048, 00:15:55.158 "data_size": 63488 00:15:55.158 }, 00:15:55.158 { 00:15:55.158 "name": "BaseBdev3", 00:15:55.158 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:55.158 "is_configured": true, 00:15:55.158 "data_offset": 2048, 
00:15:55.158 "data_size": 63488 00:15:55.158 } 00:15:55.158 ] 00:15:55.158 }' 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:55.158 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.418 [2024-11-18 04:04:51.807936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.418 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.419 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.419 "name": "raid_bdev1", 00:15:55.419 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:55.419 "strip_size_kb": 64, 00:15:55.419 "state": "online", 00:15:55.419 "raid_level": "raid5f", 00:15:55.419 "superblock": true, 00:15:55.419 "num_base_bdevs": 3, 00:15:55.419 "num_base_bdevs_discovered": 2, 00:15:55.419 "num_base_bdevs_operational": 2, 00:15:55.419 "base_bdevs_list": [ 00:15:55.419 { 00:15:55.419 "name": null, 00:15:55.419 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:55.419 "is_configured": false, 00:15:55.419 "data_offset": 0, 00:15:55.419 "data_size": 63488 00:15:55.419 }, 00:15:55.419 { 00:15:55.419 "name": "BaseBdev2", 00:15:55.419 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:55.419 "is_configured": true, 00:15:55.419 "data_offset": 2048, 00:15:55.419 "data_size": 63488 00:15:55.419 }, 00:15:55.419 { 00:15:55.419 "name": "BaseBdev3", 00:15:55.419 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:55.419 "is_configured": true, 00:15:55.419 "data_offset": 2048, 00:15:55.419 "data_size": 63488 00:15:55.419 } 00:15:55.419 ] 00:15:55.419 }' 00:15:55.419 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.419 04:04:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.678 04:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.678 04:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.678 04:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.678 [2024-11-18 04:04:52.247270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.678 [2024-11-18 04:04:52.247493] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:55.678 [2024-11-18 04:04:52.247544] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:55.678 [2024-11-18 04:04:52.247578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.678 [2024-11-18 04:04:52.262456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:55.678 04:04:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.678 04:04:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:55.678 [2024-11-18 04:04:52.269677] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:57.055 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.055 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.055 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.055 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.055 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.055 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.056 "name": "raid_bdev1", 00:15:57.056 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:57.056 "strip_size_kb": 64, 00:15:57.056 "state": "online", 00:15:57.056 
"raid_level": "raid5f", 00:15:57.056 "superblock": true, 00:15:57.056 "num_base_bdevs": 3, 00:15:57.056 "num_base_bdevs_discovered": 3, 00:15:57.056 "num_base_bdevs_operational": 3, 00:15:57.056 "process": { 00:15:57.056 "type": "rebuild", 00:15:57.056 "target": "spare", 00:15:57.056 "progress": { 00:15:57.056 "blocks": 20480, 00:15:57.056 "percent": 16 00:15:57.056 } 00:15:57.056 }, 00:15:57.056 "base_bdevs_list": [ 00:15:57.056 { 00:15:57.056 "name": "spare", 00:15:57.056 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:57.056 "is_configured": true, 00:15:57.056 "data_offset": 2048, 00:15:57.056 "data_size": 63488 00:15:57.056 }, 00:15:57.056 { 00:15:57.056 "name": "BaseBdev2", 00:15:57.056 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:57.056 "is_configured": true, 00:15:57.056 "data_offset": 2048, 00:15:57.056 "data_size": 63488 00:15:57.056 }, 00:15:57.056 { 00:15:57.056 "name": "BaseBdev3", 00:15:57.056 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:57.056 "is_configured": true, 00:15:57.056 "data_offset": 2048, 00:15:57.056 "data_size": 63488 00:15:57.056 } 00:15:57.056 ] 00:15:57.056 }' 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.056 [2024-11-18 04:04:53.420909] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.056 [2024-11-18 04:04:53.476946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:57.056 [2024-11-18 04:04:53.477020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.056 [2024-11-18 04:04:53.477036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:57.056 [2024-11-18 04:04:53.477044] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.056 "name": "raid_bdev1", 00:15:57.056 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:57.056 "strip_size_kb": 64, 00:15:57.056 "state": "online", 00:15:57.056 "raid_level": "raid5f", 00:15:57.056 "superblock": true, 00:15:57.056 "num_base_bdevs": 3, 00:15:57.056 "num_base_bdevs_discovered": 2, 00:15:57.056 "num_base_bdevs_operational": 2, 00:15:57.056 "base_bdevs_list": [ 00:15:57.056 { 00:15:57.056 "name": null, 00:15:57.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.056 "is_configured": false, 00:15:57.056 "data_offset": 0, 00:15:57.056 "data_size": 63488 00:15:57.056 }, 00:15:57.056 { 00:15:57.056 "name": "BaseBdev2", 00:15:57.056 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:57.056 "is_configured": true, 00:15:57.056 "data_offset": 2048, 00:15:57.056 "data_size": 63488 00:15:57.056 }, 00:15:57.056 { 00:15:57.056 "name": "BaseBdev3", 00:15:57.056 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:57.056 "is_configured": true, 00:15:57.056 "data_offset": 2048, 00:15:57.056 "data_size": 63488 00:15:57.056 } 00:15:57.056 ] 00:15:57.056 }' 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.056 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.315 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:57.315 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.315 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.574 [2024-11-18 04:04:53.957311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:57.574 [2024-11-18 04:04:53.957409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.574 [2024-11-18 04:04:53.957463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:57.574 [2024-11-18 04:04:53.957500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.574 [2024-11-18 04:04:53.957981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.574 [2024-11-18 04:04:53.958040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:57.574 [2024-11-18 04:04:53.958166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:57.574 [2024-11-18 04:04:53.958209] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:57.574 [2024-11-18 04:04:53.958251] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:57.574 [2024-11-18 04:04:53.958312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:57.574 [2024-11-18 04:04:53.973039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:57.574 spare 00:15:57.574 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.574 04:04:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:57.574 [2024-11-18 04:04:53.979989] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.513 04:04:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.513 "name": "raid_bdev1", 00:15:58.513 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:58.513 "strip_size_kb": 64, 00:15:58.513 "state": 
"online", 00:15:58.513 "raid_level": "raid5f", 00:15:58.513 "superblock": true, 00:15:58.513 "num_base_bdevs": 3, 00:15:58.513 "num_base_bdevs_discovered": 3, 00:15:58.513 "num_base_bdevs_operational": 3, 00:15:58.513 "process": { 00:15:58.513 "type": "rebuild", 00:15:58.513 "target": "spare", 00:15:58.513 "progress": { 00:15:58.513 "blocks": 20480, 00:15:58.513 "percent": 16 00:15:58.513 } 00:15:58.513 }, 00:15:58.513 "base_bdevs_list": [ 00:15:58.513 { 00:15:58.513 "name": "spare", 00:15:58.513 "uuid": "514cfe77-6577-5507-8904-a9291310b220", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 2048, 00:15:58.513 "data_size": 63488 00:15:58.513 }, 00:15:58.513 { 00:15:58.513 "name": "BaseBdev2", 00:15:58.513 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 2048, 00:15:58.513 "data_size": 63488 00:15:58.513 }, 00:15:58.513 { 00:15:58.513 "name": "BaseBdev3", 00:15:58.513 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:58.513 "is_configured": true, 00:15:58.513 "data_offset": 2048, 00:15:58.513 "data_size": 63488 00:15:58.513 } 00:15:58.513 ] 00:15:58.513 }' 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.513 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.513 [2024-11-18 04:04:55.123080] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.773 [2024-11-18 04:04:55.187138] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:58.773 [2024-11-18 04:04:55.187224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.773 [2024-11-18 04:04:55.187259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:58.773 [2024-11-18 04:04:55.187267] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.773 "name": "raid_bdev1", 00:15:58.773 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:58.773 "strip_size_kb": 64, 00:15:58.773 "state": "online", 00:15:58.773 "raid_level": "raid5f", 00:15:58.773 "superblock": true, 00:15:58.773 "num_base_bdevs": 3, 00:15:58.773 "num_base_bdevs_discovered": 2, 00:15:58.773 "num_base_bdevs_operational": 2, 00:15:58.773 "base_bdevs_list": [ 00:15:58.773 { 00:15:58.773 "name": null, 00:15:58.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.773 "is_configured": false, 00:15:58.773 "data_offset": 0, 00:15:58.773 "data_size": 63488 00:15:58.773 }, 00:15:58.773 { 00:15:58.773 "name": "BaseBdev2", 00:15:58.773 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:58.773 "is_configured": true, 00:15:58.773 "data_offset": 2048, 00:15:58.773 "data_size": 63488 00:15:58.773 }, 00:15:58.773 { 00:15:58.773 "name": "BaseBdev3", 00:15:58.773 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:58.773 "is_configured": true, 00:15:58.773 "data_offset": 2048, 00:15:58.773 "data_size": 63488 00:15:58.773 } 00:15:58.773 ] 00:15:58.773 }' 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.773 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.341 "name": "raid_bdev1", 00:15:59.341 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:15:59.341 "strip_size_kb": 64, 00:15:59.341 "state": "online", 00:15:59.341 "raid_level": "raid5f", 00:15:59.341 "superblock": true, 00:15:59.341 "num_base_bdevs": 3, 00:15:59.341 "num_base_bdevs_discovered": 2, 00:15:59.341 "num_base_bdevs_operational": 2, 00:15:59.341 "base_bdevs_list": [ 00:15:59.341 { 00:15:59.341 "name": null, 00:15:59.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.341 "is_configured": false, 00:15:59.341 "data_offset": 0, 00:15:59.341 "data_size": 63488 00:15:59.341 }, 00:15:59.341 { 00:15:59.341 "name": "BaseBdev2", 00:15:59.341 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:15:59.341 "is_configured": true, 00:15:59.341 "data_offset": 2048, 00:15:59.341 "data_size": 63488 00:15:59.341 }, 00:15:59.341 { 00:15:59.341 "name": "BaseBdev3", 00:15:59.341 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:15:59.341 "is_configured": true, 
00:15:59.341 "data_offset": 2048, 00:15:59.341 "data_size": 63488 00:15:59.341 } 00:15:59.341 ] 00:15:59.341 }' 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.341 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.341 [2024-11-18 04:04:55.851662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.341 [2024-11-18 04:04:55.851713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.341 [2024-11-18 04:04:55.851752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:59.341 [2024-11-18 04:04:55.851761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.341 [2024-11-18 04:04:55.852215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.341 [2024-11-18 
04:04:55.852232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.342 [2024-11-18 04:04:55.852308] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:59.342 [2024-11-18 04:04:55.852321] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:59.342 [2024-11-18 04:04:55.852346] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:59.342 [2024-11-18 04:04:55.852356] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:59.342 BaseBdev1 00:15:59.342 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.342 04:04:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.279 04:04:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.279 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.279 "name": "raid_bdev1", 00:16:00.279 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:16:00.279 "strip_size_kb": 64, 00:16:00.279 "state": "online", 00:16:00.279 "raid_level": "raid5f", 00:16:00.279 "superblock": true, 00:16:00.279 "num_base_bdevs": 3, 00:16:00.279 "num_base_bdevs_discovered": 2, 00:16:00.279 "num_base_bdevs_operational": 2, 00:16:00.279 "base_bdevs_list": [ 00:16:00.279 { 00:16:00.280 "name": null, 00:16:00.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.280 "is_configured": false, 00:16:00.280 "data_offset": 0, 00:16:00.280 "data_size": 63488 00:16:00.280 }, 00:16:00.280 { 00:16:00.280 "name": "BaseBdev2", 00:16:00.280 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:16:00.280 "is_configured": true, 00:16:00.280 "data_offset": 2048, 00:16:00.280 "data_size": 63488 00:16:00.280 }, 00:16:00.280 { 00:16:00.280 "name": "BaseBdev3", 00:16:00.280 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:16:00.280 "is_configured": true, 00:16:00.280 "data_offset": 2048, 00:16:00.280 "data_size": 63488 00:16:00.280 } 00:16:00.280 ] 00:16:00.280 }' 00:16:00.280 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.280 04:04:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.848 "name": "raid_bdev1", 00:16:00.848 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:16:00.848 "strip_size_kb": 64, 00:16:00.848 "state": "online", 00:16:00.848 "raid_level": "raid5f", 00:16:00.848 "superblock": true, 00:16:00.848 "num_base_bdevs": 3, 00:16:00.848 "num_base_bdevs_discovered": 2, 00:16:00.848 "num_base_bdevs_operational": 2, 00:16:00.848 "base_bdevs_list": [ 00:16:00.848 { 00:16:00.848 "name": null, 00:16:00.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.848 "is_configured": false, 00:16:00.848 "data_offset": 0, 00:16:00.848 "data_size": 63488 00:16:00.848 }, 00:16:00.848 { 00:16:00.848 "name": "BaseBdev2", 00:16:00.848 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 
00:16:00.848 "is_configured": true, 00:16:00.848 "data_offset": 2048, 00:16:00.848 "data_size": 63488 00:16:00.848 }, 00:16:00.848 { 00:16:00.848 "name": "BaseBdev3", 00:16:00.848 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:16:00.848 "is_configured": true, 00:16:00.848 "data_offset": 2048, 00:16:00.848 "data_size": 63488 00:16:00.848 } 00:16:00.848 ] 00:16:00.848 }' 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.848 04:04:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.848 [2024-11-18 04:04:57.428975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.848 [2024-11-18 04:04:57.429174] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:00.848 [2024-11-18 04:04:57.429233] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:00.848 request: 00:16:00.848 { 00:16:00.848 "base_bdev": "BaseBdev1", 00:16:00.848 "raid_bdev": "raid_bdev1", 00:16:00.848 "method": "bdev_raid_add_base_bdev", 00:16:00.848 "req_id": 1 00:16:00.848 } 00:16:00.848 Got JSON-RPC error response 00:16:00.848 response: 00:16:00.848 { 00:16:00.848 "code": -22, 00:16:00.848 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:00.848 } 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.848 04:04:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.225 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.225 "name": "raid_bdev1", 00:16:02.225 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:16:02.225 "strip_size_kb": 64, 00:16:02.225 "state": "online", 00:16:02.225 "raid_level": "raid5f", 00:16:02.225 "superblock": true, 00:16:02.225 "num_base_bdevs": 3, 00:16:02.225 "num_base_bdevs_discovered": 2, 00:16:02.225 "num_base_bdevs_operational": 2, 00:16:02.225 "base_bdevs_list": [ 00:16:02.225 { 00:16:02.225 "name": null, 00:16:02.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.225 "is_configured": false, 00:16:02.225 "data_offset": 0, 00:16:02.225 "data_size": 63488 00:16:02.225 }, 00:16:02.225 { 00:16:02.225 
"name": "BaseBdev2", 00:16:02.226 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:16:02.226 "is_configured": true, 00:16:02.226 "data_offset": 2048, 00:16:02.226 "data_size": 63488 00:16:02.226 }, 00:16:02.226 { 00:16:02.226 "name": "BaseBdev3", 00:16:02.226 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:16:02.226 "is_configured": true, 00:16:02.226 "data_offset": 2048, 00:16:02.226 "data_size": 63488 00:16:02.226 } 00:16:02.226 ] 00:16:02.226 }' 00:16:02.226 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.226 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.485 "name": "raid_bdev1", 00:16:02.485 "uuid": "075f7523-ba35-4a41-987e-0ec241ea6f49", 00:16:02.485 
"strip_size_kb": 64, 00:16:02.485 "state": "online", 00:16:02.485 "raid_level": "raid5f", 00:16:02.485 "superblock": true, 00:16:02.485 "num_base_bdevs": 3, 00:16:02.485 "num_base_bdevs_discovered": 2, 00:16:02.485 "num_base_bdevs_operational": 2, 00:16:02.485 "base_bdevs_list": [ 00:16:02.485 { 00:16:02.485 "name": null, 00:16:02.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.485 "is_configured": false, 00:16:02.485 "data_offset": 0, 00:16:02.485 "data_size": 63488 00:16:02.485 }, 00:16:02.485 { 00:16:02.485 "name": "BaseBdev2", 00:16:02.485 "uuid": "b5ae8028-124d-5dc5-8a20-a5ebfeb1d533", 00:16:02.485 "is_configured": true, 00:16:02.485 "data_offset": 2048, 00:16:02.485 "data_size": 63488 00:16:02.485 }, 00:16:02.485 { 00:16:02.485 "name": "BaseBdev3", 00:16:02.485 "uuid": "d5bfa121-b355-5b92-ae5e-f2e96864e254", 00:16:02.485 "is_configured": true, 00:16:02.485 "data_offset": 2048, 00:16:02.485 "data_size": 63488 00:16:02.485 } 00:16:02.485 ] 00:16:02.485 }' 00:16:02.485 04:04:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81917 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81917 ']' 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81917 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.485 04:04:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81917 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81917' 00:16:02.485 killing process with pid 81917 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81917 00:16:02.485 Received shutdown signal, test time was about 60.000000 seconds 00:16:02.485 00:16:02.485 Latency(us) 00:16:02.485 [2024-11-18T04:04:59.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.485 [2024-11-18T04:04:59.126Z] =================================================================================================================== 00:16:02.485 [2024-11-18T04:04:59.126Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:02.485 [2024-11-18 04:04:59.114038] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.485 04:04:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81917 00:16:02.485 [2024-11-18 04:04:59.114193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.485 [2024-11-18 04:04:59.114256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.485 [2024-11-18 04:04:59.114268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:03.067 [2024-11-18 04:04:59.479493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.004 04:05:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:04.004 00:16:04.004 real 0m23.045s 00:16:04.004 user 0m29.632s 
00:16:04.004 sys 0m2.697s 00:16:04.004 04:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.004 04:05:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.004 ************************************ 00:16:04.004 END TEST raid5f_rebuild_test_sb 00:16:04.004 ************************************ 00:16:04.004 04:05:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:04.004 04:05:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:04.004 04:05:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:04.004 04:05:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.004 04:05:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.004 ************************************ 00:16:04.004 START TEST raid5f_state_function_test 00:16:04.005 ************************************ 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82660 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82660' 00:16:04.005 Process raid pid: 82660 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82660 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82660 ']' 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.005 04:05:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.264 [2024-11-18 04:05:00.658435] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:04.264 [2024-11-18 04:05:00.658615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.264 [2024-11-18 04:05:00.810742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.523 [2024-11-18 04:05:00.916666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.523 [2024-11-18 04:05:01.112102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.523 [2024-11-18 04:05:01.112184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.090 [2024-11-18 04:05:01.485740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.090 [2024-11-18 04:05:01.485840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.090 [2024-11-18 04:05:01.485890] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.090 [2024-11-18 04:05:01.485914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.090 [2024-11-18 04:05:01.485933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:05.090 [2024-11-18 04:05:01.485953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.090 [2024-11-18 04:05:01.485970] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:05.090 [2024-11-18 04:05:01.485990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.090 04:05:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.090 "name": "Existed_Raid", 00:16:05.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.090 "strip_size_kb": 64, 00:16:05.090 "state": "configuring", 00:16:05.090 "raid_level": "raid5f", 00:16:05.090 "superblock": false, 00:16:05.090 "num_base_bdevs": 4, 00:16:05.090 "num_base_bdevs_discovered": 0, 00:16:05.090 "num_base_bdevs_operational": 4, 00:16:05.090 "base_bdevs_list": [ 00:16:05.090 { 00:16:05.090 "name": "BaseBdev1", 00:16:05.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.090 "is_configured": false, 00:16:05.090 "data_offset": 0, 00:16:05.090 "data_size": 0 00:16:05.090 }, 00:16:05.090 { 00:16:05.090 "name": "BaseBdev2", 00:16:05.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.090 "is_configured": false, 00:16:05.090 "data_offset": 0, 00:16:05.090 "data_size": 0 00:16:05.090 }, 00:16:05.090 { 00:16:05.090 "name": "BaseBdev3", 00:16:05.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.090 "is_configured": false, 00:16:05.090 "data_offset": 0, 00:16:05.090 "data_size": 0 00:16:05.090 }, 00:16:05.090 { 00:16:05.090 "name": "BaseBdev4", 00:16:05.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.090 "is_configured": false, 00:16:05.090 "data_offset": 0, 00:16:05.090 "data_size": 0 00:16:05.090 } 00:16:05.090 ] 00:16:05.090 }' 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.090 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.350 [2024-11-18 04:05:01.940895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.350 [2024-11-18 04:05:01.940963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.350 [2024-11-18 04:05:01.952882] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.350 [2024-11-18 04:05:01.952951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.350 [2024-11-18 04:05:01.952976] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.350 [2024-11-18 04:05:01.952998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.350 [2024-11-18 04:05:01.953015] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.350 [2024-11-18 04:05:01.953034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.350 [2024-11-18 04:05:01.953051] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:05.350 [2024-11-18 04:05:01.953087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.350 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.609 [2024-11-18 04:05:01.998038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.610 BaseBdev1 00:16:05.610 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.610 04:05:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:05.610 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:05.610 04:05:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.610 
04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.610 [ 00:16:05.610 { 00:16:05.610 "name": "BaseBdev1", 00:16:05.610 "aliases": [ 00:16:05.610 "51416f74-cf3b-4747-bd50-f7c8a7c44119" 00:16:05.610 ], 00:16:05.610 "product_name": "Malloc disk", 00:16:05.610 "block_size": 512, 00:16:05.610 "num_blocks": 65536, 00:16:05.610 "uuid": "51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:05.610 "assigned_rate_limits": { 00:16:05.610 "rw_ios_per_sec": 0, 00:16:05.610 "rw_mbytes_per_sec": 0, 00:16:05.610 "r_mbytes_per_sec": 0, 00:16:05.610 "w_mbytes_per_sec": 0 00:16:05.610 }, 00:16:05.610 "claimed": true, 00:16:05.610 "claim_type": "exclusive_write", 00:16:05.610 "zoned": false, 00:16:05.610 "supported_io_types": { 00:16:05.610 "read": true, 00:16:05.610 "write": true, 00:16:05.610 "unmap": true, 00:16:05.610 "flush": true, 00:16:05.610 "reset": true, 00:16:05.610 "nvme_admin": false, 00:16:05.610 "nvme_io": false, 00:16:05.610 "nvme_io_md": false, 00:16:05.610 "write_zeroes": true, 00:16:05.610 "zcopy": true, 00:16:05.610 "get_zone_info": false, 00:16:05.610 "zone_management": false, 00:16:05.610 "zone_append": false, 00:16:05.610 "compare": false, 00:16:05.610 "compare_and_write": false, 00:16:05.610 "abort": true, 00:16:05.610 "seek_hole": false, 00:16:05.610 "seek_data": false, 00:16:05.610 "copy": true, 00:16:05.610 "nvme_iov_md": false 00:16:05.610 }, 00:16:05.610 "memory_domains": [ 00:16:05.610 { 00:16:05.610 "dma_device_id": "system", 00:16:05.610 "dma_device_type": 1 00:16:05.610 }, 00:16:05.610 { 00:16:05.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.610 "dma_device_type": 2 00:16:05.610 } 00:16:05.610 ], 00:16:05.610 "driver_specific": {} 00:16:05.610 } 
00:16:05.610 ] 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.610 "name": "Existed_Raid", 00:16:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.610 "strip_size_kb": 64, 00:16:05.610 "state": "configuring", 00:16:05.610 "raid_level": "raid5f", 00:16:05.610 "superblock": false, 00:16:05.610 "num_base_bdevs": 4, 00:16:05.610 "num_base_bdevs_discovered": 1, 00:16:05.610 "num_base_bdevs_operational": 4, 00:16:05.610 "base_bdevs_list": [ 00:16:05.610 { 00:16:05.610 "name": "BaseBdev1", 00:16:05.610 "uuid": "51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:05.610 "is_configured": true, 00:16:05.610 "data_offset": 0, 00:16:05.610 "data_size": 65536 00:16:05.610 }, 00:16:05.610 { 00:16:05.610 "name": "BaseBdev2", 00:16:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.610 "is_configured": false, 00:16:05.610 "data_offset": 0, 00:16:05.610 "data_size": 0 00:16:05.610 }, 00:16:05.610 { 00:16:05.610 "name": "BaseBdev3", 00:16:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.610 "is_configured": false, 00:16:05.610 "data_offset": 0, 00:16:05.610 "data_size": 0 00:16:05.610 }, 00:16:05.610 { 00:16:05.610 "name": "BaseBdev4", 00:16:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.610 "is_configured": false, 00:16:05.610 "data_offset": 0, 00:16:05.610 "data_size": 0 00:16:05.610 } 00:16:05.610 ] 00:16:05.610 }' 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.610 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.178 
[2024-11-18 04:05:02.517184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.178 [2024-11-18 04:05:02.517286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.178 [2024-11-18 04:05:02.525220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.178 [2024-11-18 04:05:02.526952] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.178 [2024-11-18 04:05:02.527035] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.178 [2024-11-18 04:05:02.527063] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.178 [2024-11-18 04:05:02.527087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.178 [2024-11-18 04:05:02.527104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:06.178 [2024-11-18 04:05:02.527125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.178 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.178 "name": "Existed_Raid", 00:16:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:06.178 "strip_size_kb": 64, 00:16:06.178 "state": "configuring", 00:16:06.178 "raid_level": "raid5f", 00:16:06.178 "superblock": false, 00:16:06.178 "num_base_bdevs": 4, 00:16:06.178 "num_base_bdevs_discovered": 1, 00:16:06.178 "num_base_bdevs_operational": 4, 00:16:06.178 "base_bdevs_list": [ 00:16:06.178 { 00:16:06.178 "name": "BaseBdev1", 00:16:06.178 "uuid": "51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:06.178 "is_configured": true, 00:16:06.178 "data_offset": 0, 00:16:06.178 "data_size": 65536 00:16:06.178 }, 00:16:06.178 { 00:16:06.178 "name": "BaseBdev2", 00:16:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.178 "is_configured": false, 00:16:06.178 "data_offset": 0, 00:16:06.178 "data_size": 0 00:16:06.178 }, 00:16:06.178 { 00:16:06.178 "name": "BaseBdev3", 00:16:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.178 "is_configured": false, 00:16:06.178 "data_offset": 0, 00:16:06.178 "data_size": 0 00:16:06.178 }, 00:16:06.178 { 00:16:06.178 "name": "BaseBdev4", 00:16:06.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.178 "is_configured": false, 00:16:06.179 "data_offset": 0, 00:16:06.179 "data_size": 0 00:16:06.179 } 00:16:06.179 ] 00:16:06.179 }' 00:16:06.179 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.179 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.438 [2024-11-18 04:05:02.979866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.438 BaseBdev2 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.438 04:05:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.438 [ 00:16:06.438 { 00:16:06.438 "name": "BaseBdev2", 00:16:06.438 "aliases": [ 00:16:06.438 "530da87e-fbc3-4035-a6a3-2064d04c7cce" 00:16:06.438 ], 00:16:06.438 "product_name": "Malloc disk", 00:16:06.438 "block_size": 512, 00:16:06.438 "num_blocks": 65536, 00:16:06.438 "uuid": "530da87e-fbc3-4035-a6a3-2064d04c7cce", 00:16:06.438 "assigned_rate_limits": { 00:16:06.438 "rw_ios_per_sec": 0, 00:16:06.438 "rw_mbytes_per_sec": 0, 00:16:06.438 
"r_mbytes_per_sec": 0, 00:16:06.438 "w_mbytes_per_sec": 0 00:16:06.438 }, 00:16:06.438 "claimed": true, 00:16:06.438 "claim_type": "exclusive_write", 00:16:06.438 "zoned": false, 00:16:06.438 "supported_io_types": { 00:16:06.438 "read": true, 00:16:06.438 "write": true, 00:16:06.438 "unmap": true, 00:16:06.438 "flush": true, 00:16:06.438 "reset": true, 00:16:06.438 "nvme_admin": false, 00:16:06.438 "nvme_io": false, 00:16:06.438 "nvme_io_md": false, 00:16:06.438 "write_zeroes": true, 00:16:06.438 "zcopy": true, 00:16:06.438 "get_zone_info": false, 00:16:06.438 "zone_management": false, 00:16:06.438 "zone_append": false, 00:16:06.438 "compare": false, 00:16:06.438 "compare_and_write": false, 00:16:06.438 "abort": true, 00:16:06.438 "seek_hole": false, 00:16:06.438 "seek_data": false, 00:16:06.438 "copy": true, 00:16:06.438 "nvme_iov_md": false 00:16:06.438 }, 00:16:06.438 "memory_domains": [ 00:16:06.438 { 00:16:06.438 "dma_device_id": "system", 00:16:06.438 "dma_device_type": 1 00:16:06.438 }, 00:16:06.438 { 00:16:06.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.438 "dma_device_type": 2 00:16:06.438 } 00:16:06.438 ], 00:16:06.438 "driver_specific": {} 00:16:06.438 } 00:16:06.438 ] 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.438 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.438 "name": "Existed_Raid", 00:16:06.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.438 "strip_size_kb": 64, 00:16:06.438 "state": "configuring", 00:16:06.438 "raid_level": "raid5f", 00:16:06.438 "superblock": false, 00:16:06.438 "num_base_bdevs": 4, 00:16:06.439 "num_base_bdevs_discovered": 2, 00:16:06.439 "num_base_bdevs_operational": 4, 00:16:06.439 "base_bdevs_list": [ 00:16:06.439 { 00:16:06.439 "name": "BaseBdev1", 00:16:06.439 "uuid": 
"51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:06.439 "is_configured": true, 00:16:06.439 "data_offset": 0, 00:16:06.439 "data_size": 65536 00:16:06.439 }, 00:16:06.439 { 00:16:06.439 "name": "BaseBdev2", 00:16:06.439 "uuid": "530da87e-fbc3-4035-a6a3-2064d04c7cce", 00:16:06.439 "is_configured": true, 00:16:06.439 "data_offset": 0, 00:16:06.439 "data_size": 65536 00:16:06.439 }, 00:16:06.439 { 00:16:06.439 "name": "BaseBdev3", 00:16:06.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.439 "is_configured": false, 00:16:06.439 "data_offset": 0, 00:16:06.439 "data_size": 0 00:16:06.439 }, 00:16:06.439 { 00:16:06.439 "name": "BaseBdev4", 00:16:06.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.439 "is_configured": false, 00:16:06.439 "data_offset": 0, 00:16:06.439 "data_size": 0 00:16:06.439 } 00:16:06.439 ] 00:16:06.439 }' 00:16:06.439 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.439 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 [2024-11-18 04:05:03.470329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.007 BaseBdev3 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 [ 00:16:07.007 { 00:16:07.007 "name": "BaseBdev3", 00:16:07.007 "aliases": [ 00:16:07.007 "42cbb104-9576-4d69-ab0c-7a6d587a7fba" 00:16:07.007 ], 00:16:07.007 "product_name": "Malloc disk", 00:16:07.007 "block_size": 512, 00:16:07.007 "num_blocks": 65536, 00:16:07.007 "uuid": "42cbb104-9576-4d69-ab0c-7a6d587a7fba", 00:16:07.007 "assigned_rate_limits": { 00:16:07.007 "rw_ios_per_sec": 0, 00:16:07.007 "rw_mbytes_per_sec": 0, 00:16:07.007 "r_mbytes_per_sec": 0, 00:16:07.007 "w_mbytes_per_sec": 0 00:16:07.007 }, 00:16:07.007 "claimed": true, 00:16:07.007 "claim_type": "exclusive_write", 00:16:07.007 "zoned": false, 00:16:07.007 "supported_io_types": { 00:16:07.007 "read": true, 00:16:07.007 "write": true, 00:16:07.007 "unmap": true, 00:16:07.007 "flush": true, 00:16:07.007 "reset": true, 00:16:07.007 "nvme_admin": false, 
00:16:07.007 "nvme_io": false, 00:16:07.007 "nvme_io_md": false, 00:16:07.007 "write_zeroes": true, 00:16:07.007 "zcopy": true, 00:16:07.007 "get_zone_info": false, 00:16:07.007 "zone_management": false, 00:16:07.007 "zone_append": false, 00:16:07.007 "compare": false, 00:16:07.007 "compare_and_write": false, 00:16:07.007 "abort": true, 00:16:07.007 "seek_hole": false, 00:16:07.007 "seek_data": false, 00:16:07.007 "copy": true, 00:16:07.007 "nvme_iov_md": false 00:16:07.007 }, 00:16:07.007 "memory_domains": [ 00:16:07.007 { 00:16:07.007 "dma_device_id": "system", 00:16:07.007 "dma_device_type": 1 00:16:07.007 }, 00:16:07.007 { 00:16:07.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.007 "dma_device_type": 2 00:16:07.007 } 00:16:07.007 ], 00:16:07.007 "driver_specific": {} 00:16:07.007 } 00:16:07.007 ] 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.007 "name": "Existed_Raid", 00:16:07.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.007 "strip_size_kb": 64, 00:16:07.007 "state": "configuring", 00:16:07.007 "raid_level": "raid5f", 00:16:07.007 "superblock": false, 00:16:07.007 "num_base_bdevs": 4, 00:16:07.007 "num_base_bdevs_discovered": 3, 00:16:07.007 "num_base_bdevs_operational": 4, 00:16:07.007 "base_bdevs_list": [ 00:16:07.007 { 00:16:07.007 "name": "BaseBdev1", 00:16:07.007 "uuid": "51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:07.007 "is_configured": true, 00:16:07.007 "data_offset": 0, 00:16:07.007 "data_size": 65536 00:16:07.007 }, 00:16:07.007 { 00:16:07.007 "name": "BaseBdev2", 00:16:07.007 "uuid": "530da87e-fbc3-4035-a6a3-2064d04c7cce", 00:16:07.007 "is_configured": true, 00:16:07.007 "data_offset": 0, 00:16:07.007 "data_size": 65536 00:16:07.007 }, 00:16:07.007 { 
00:16:07.007 "name": "BaseBdev3", 00:16:07.007 "uuid": "42cbb104-9576-4d69-ab0c-7a6d587a7fba", 00:16:07.007 "is_configured": true, 00:16:07.007 "data_offset": 0, 00:16:07.007 "data_size": 65536 00:16:07.007 }, 00:16:07.007 { 00:16:07.007 "name": "BaseBdev4", 00:16:07.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.007 "is_configured": false, 00:16:07.007 "data_offset": 0, 00:16:07.007 "data_size": 0 00:16:07.007 } 00:16:07.007 ] 00:16:07.007 }' 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.007 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 04:05:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:07.576 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.576 04:05:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 [2024-11-18 04:05:04.001090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.576 [2024-11-18 04:05:04.001227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:07.576 [2024-11-18 04:05:04.001256] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:07.576 [2024-11-18 04:05:04.001545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:07.576 [2024-11-18 04:05:04.008680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:07.576 [2024-11-18 04:05:04.008736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:07.576 [2024-11-18 04:05:04.009059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.576 BaseBdev4 00:16:07.576 04:05:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 [ 00:16:07.576 { 00:16:07.576 "name": "BaseBdev4", 00:16:07.576 "aliases": [ 00:16:07.576 "4e6cecfe-991e-4fb9-9070-6e801e7668c6" 00:16:07.576 ], 00:16:07.576 "product_name": "Malloc disk", 00:16:07.576 "block_size": 512, 00:16:07.576 "num_blocks": 65536, 00:16:07.576 "uuid": "4e6cecfe-991e-4fb9-9070-6e801e7668c6", 00:16:07.576 "assigned_rate_limits": { 00:16:07.576 "rw_ios_per_sec": 0, 00:16:07.576 
"rw_mbytes_per_sec": 0, 00:16:07.576 "r_mbytes_per_sec": 0, 00:16:07.576 "w_mbytes_per_sec": 0 00:16:07.576 }, 00:16:07.576 "claimed": true, 00:16:07.576 "claim_type": "exclusive_write", 00:16:07.576 "zoned": false, 00:16:07.576 "supported_io_types": { 00:16:07.576 "read": true, 00:16:07.576 "write": true, 00:16:07.576 "unmap": true, 00:16:07.576 "flush": true, 00:16:07.576 "reset": true, 00:16:07.576 "nvme_admin": false, 00:16:07.576 "nvme_io": false, 00:16:07.576 "nvme_io_md": false, 00:16:07.576 "write_zeroes": true, 00:16:07.576 "zcopy": true, 00:16:07.576 "get_zone_info": false, 00:16:07.576 "zone_management": false, 00:16:07.576 "zone_append": false, 00:16:07.576 "compare": false, 00:16:07.576 "compare_and_write": false, 00:16:07.576 "abort": true, 00:16:07.576 "seek_hole": false, 00:16:07.576 "seek_data": false, 00:16:07.576 "copy": true, 00:16:07.576 "nvme_iov_md": false 00:16:07.576 }, 00:16:07.576 "memory_domains": [ 00:16:07.576 { 00:16:07.576 "dma_device_id": "system", 00:16:07.576 "dma_device_type": 1 00:16:07.576 }, 00:16:07.576 { 00:16:07.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.576 "dma_device_type": 2 00:16:07.576 } 00:16:07.576 ], 00:16:07.576 "driver_specific": {} 00:16:07.576 } 00:16:07.576 ] 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.576 04:05:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.576 "name": "Existed_Raid", 00:16:07.576 "uuid": "8ca754f7-5324-4213-8fd1-377dc66454f7", 00:16:07.576 "strip_size_kb": 64, 00:16:07.576 "state": "online", 00:16:07.576 "raid_level": "raid5f", 00:16:07.576 "superblock": false, 00:16:07.576 "num_base_bdevs": 4, 00:16:07.576 "num_base_bdevs_discovered": 4, 00:16:07.576 "num_base_bdevs_operational": 4, 00:16:07.576 "base_bdevs_list": [ 00:16:07.576 { 00:16:07.576 "name": 
"BaseBdev1", 00:16:07.576 "uuid": "51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:07.576 "is_configured": true, 00:16:07.576 "data_offset": 0, 00:16:07.576 "data_size": 65536 00:16:07.576 }, 00:16:07.576 { 00:16:07.576 "name": "BaseBdev2", 00:16:07.576 "uuid": "530da87e-fbc3-4035-a6a3-2064d04c7cce", 00:16:07.576 "is_configured": true, 00:16:07.576 "data_offset": 0, 00:16:07.576 "data_size": 65536 00:16:07.576 }, 00:16:07.576 { 00:16:07.576 "name": "BaseBdev3", 00:16:07.576 "uuid": "42cbb104-9576-4d69-ab0c-7a6d587a7fba", 00:16:07.576 "is_configured": true, 00:16:07.576 "data_offset": 0, 00:16:07.576 "data_size": 65536 00:16:07.576 }, 00:16:07.576 { 00:16:07.576 "name": "BaseBdev4", 00:16:07.576 "uuid": "4e6cecfe-991e-4fb9-9070-6e801e7668c6", 00:16:07.576 "is_configured": true, 00:16:07.576 "data_offset": 0, 00:16:07.576 "data_size": 65536 00:16:07.576 } 00:16:07.576 ] 00:16:07.576 }' 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.576 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.835 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.835 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:07.835 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.835 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.835 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.835 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.095 [2024-11-18 04:05:04.484438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.095 "name": "Existed_Raid", 00:16:08.095 "aliases": [ 00:16:08.095 "8ca754f7-5324-4213-8fd1-377dc66454f7" 00:16:08.095 ], 00:16:08.095 "product_name": "Raid Volume", 00:16:08.095 "block_size": 512, 00:16:08.095 "num_blocks": 196608, 00:16:08.095 "uuid": "8ca754f7-5324-4213-8fd1-377dc66454f7", 00:16:08.095 "assigned_rate_limits": { 00:16:08.095 "rw_ios_per_sec": 0, 00:16:08.095 "rw_mbytes_per_sec": 0, 00:16:08.095 "r_mbytes_per_sec": 0, 00:16:08.095 "w_mbytes_per_sec": 0 00:16:08.095 }, 00:16:08.095 "claimed": false, 00:16:08.095 "zoned": false, 00:16:08.095 "supported_io_types": { 00:16:08.095 "read": true, 00:16:08.095 "write": true, 00:16:08.095 "unmap": false, 00:16:08.095 "flush": false, 00:16:08.095 "reset": true, 00:16:08.095 "nvme_admin": false, 00:16:08.095 "nvme_io": false, 00:16:08.095 "nvme_io_md": false, 00:16:08.095 "write_zeroes": true, 00:16:08.095 "zcopy": false, 00:16:08.095 "get_zone_info": false, 00:16:08.095 "zone_management": false, 00:16:08.095 "zone_append": false, 00:16:08.095 "compare": false, 00:16:08.095 "compare_and_write": false, 00:16:08.095 "abort": false, 00:16:08.095 "seek_hole": false, 00:16:08.095 "seek_data": false, 00:16:08.095 "copy": false, 00:16:08.095 "nvme_iov_md": false 00:16:08.095 }, 00:16:08.095 "driver_specific": { 00:16:08.095 "raid": { 00:16:08.095 "uuid": "8ca754f7-5324-4213-8fd1-377dc66454f7", 00:16:08.095 "strip_size_kb": 64, 
00:16:08.095 "state": "online", 00:16:08.095 "raid_level": "raid5f", 00:16:08.095 "superblock": false, 00:16:08.095 "num_base_bdevs": 4, 00:16:08.095 "num_base_bdevs_discovered": 4, 00:16:08.095 "num_base_bdevs_operational": 4, 00:16:08.095 "base_bdevs_list": [ 00:16:08.095 { 00:16:08.095 "name": "BaseBdev1", 00:16:08.095 "uuid": "51416f74-cf3b-4747-bd50-f7c8a7c44119", 00:16:08.095 "is_configured": true, 00:16:08.095 "data_offset": 0, 00:16:08.095 "data_size": 65536 00:16:08.095 }, 00:16:08.095 { 00:16:08.095 "name": "BaseBdev2", 00:16:08.095 "uuid": "530da87e-fbc3-4035-a6a3-2064d04c7cce", 00:16:08.095 "is_configured": true, 00:16:08.095 "data_offset": 0, 00:16:08.095 "data_size": 65536 00:16:08.095 }, 00:16:08.095 { 00:16:08.095 "name": "BaseBdev3", 00:16:08.095 "uuid": "42cbb104-9576-4d69-ab0c-7a6d587a7fba", 00:16:08.095 "is_configured": true, 00:16:08.095 "data_offset": 0, 00:16:08.095 "data_size": 65536 00:16:08.095 }, 00:16:08.095 { 00:16:08.095 "name": "BaseBdev4", 00:16:08.095 "uuid": "4e6cecfe-991e-4fb9-9070-6e801e7668c6", 00:16:08.095 "is_configured": true, 00:16:08.095 "data_offset": 0, 00:16:08.095 "data_size": 65536 00:16:08.095 } 00:16:08.095 ] 00:16:08.095 } 00:16:08.095 } 00:16:08.095 }' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:08.095 BaseBdev2 00:16:08.095 BaseBdev3 00:16:08.095 BaseBdev4' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.095 04:05:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.095 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:08.356 [2024-11-18 04:05:04.811746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.356 04:05:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.356 "name": "Existed_Raid", 00:16:08.356 "uuid": "8ca754f7-5324-4213-8fd1-377dc66454f7", 00:16:08.356 "strip_size_kb": 64, 00:16:08.356 "state": "online", 00:16:08.356 "raid_level": "raid5f", 00:16:08.356 "superblock": false, 00:16:08.356 "num_base_bdevs": 4, 00:16:08.356 "num_base_bdevs_discovered": 3, 00:16:08.356 "num_base_bdevs_operational": 3, 00:16:08.356 "base_bdevs_list": [ 00:16:08.356 { 00:16:08.356 "name": null, 00:16:08.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.356 "is_configured": false, 00:16:08.356 "data_offset": 0, 00:16:08.356 "data_size": 65536 00:16:08.356 }, 00:16:08.356 { 00:16:08.356 "name": "BaseBdev2", 00:16:08.356 "uuid": "530da87e-fbc3-4035-a6a3-2064d04c7cce", 00:16:08.356 "is_configured": true, 00:16:08.356 "data_offset": 0, 00:16:08.356 "data_size": 65536 00:16:08.356 }, 00:16:08.356 { 00:16:08.356 "name": "BaseBdev3", 00:16:08.356 "uuid": "42cbb104-9576-4d69-ab0c-7a6d587a7fba", 00:16:08.356 "is_configured": true, 00:16:08.356 "data_offset": 0, 00:16:08.356 "data_size": 65536 00:16:08.356 }, 00:16:08.356 { 00:16:08.356 "name": "BaseBdev4", 00:16:08.356 "uuid": "4e6cecfe-991e-4fb9-9070-6e801e7668c6", 00:16:08.356 "is_configured": true, 00:16:08.356 "data_offset": 0, 00:16:08.356 "data_size": 65536 00:16:08.356 } 00:16:08.356 ] 00:16:08.356 }' 00:16:08.356 
04:05:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.356 04:05:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 [2024-11-18 04:05:05.391546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.927 [2024-11-18 04:05:05.391680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.927 [2024-11-18 04:05:05.480515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.927 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.927 [2024-11-18 04:05:05.540421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.288 [2024-11-18 04:05:05.691084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:09.288 [2024-11-18 04:05:05.691188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.288 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.289 BaseBdev2 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.289 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.555 [ 00:16:09.555 { 00:16:09.555 "name": "BaseBdev2", 00:16:09.555 "aliases": [ 00:16:09.555 "f7c7d10c-095c-4113-99eb-d086d07e9bd9" 00:16:09.555 ], 00:16:09.555 "product_name": "Malloc disk", 00:16:09.555 "block_size": 512, 00:16:09.555 "num_blocks": 65536, 00:16:09.555 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:09.555 "assigned_rate_limits": { 00:16:09.555 "rw_ios_per_sec": 0, 00:16:09.555 "rw_mbytes_per_sec": 0, 00:16:09.555 "r_mbytes_per_sec": 0, 00:16:09.555 "w_mbytes_per_sec": 0 00:16:09.555 }, 00:16:09.555 "claimed": false, 00:16:09.555 "zoned": false, 00:16:09.555 "supported_io_types": { 00:16:09.555 "read": true, 00:16:09.555 "write": true, 00:16:09.555 "unmap": true, 00:16:09.555 "flush": true, 00:16:09.555 "reset": true, 00:16:09.555 "nvme_admin": false, 00:16:09.555 "nvme_io": false, 00:16:09.555 "nvme_io_md": false, 00:16:09.555 "write_zeroes": true, 00:16:09.555 "zcopy": true, 00:16:09.555 "get_zone_info": false, 00:16:09.555 "zone_management": false, 00:16:09.555 "zone_append": false, 00:16:09.555 "compare": false, 00:16:09.555 "compare_and_write": false, 00:16:09.555 "abort": true, 00:16:09.555 "seek_hole": false, 00:16:09.555 "seek_data": false, 00:16:09.555 "copy": true, 00:16:09.555 "nvme_iov_md": false 00:16:09.555 }, 00:16:09.555 "memory_domains": [ 00:16:09.555 { 00:16:09.555 "dma_device_id": "system", 00:16:09.555 
"dma_device_type": 1 00:16:09.555 }, 00:16:09.555 { 00:16:09.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.555 "dma_device_type": 2 00:16:09.555 } 00:16:09.555 ], 00:16:09.556 "driver_specific": {} 00:16:09.556 } 00:16:09.556 ] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 BaseBdev3 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.556 04:05:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 [ 00:16:09.556 { 00:16:09.556 "name": "BaseBdev3", 00:16:09.556 "aliases": [ 00:16:09.556 "50554960-2e82-4111-9415-aeae4e64f091" 00:16:09.556 ], 00:16:09.556 "product_name": "Malloc disk", 00:16:09.556 "block_size": 512, 00:16:09.556 "num_blocks": 65536, 00:16:09.556 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:09.556 "assigned_rate_limits": { 00:16:09.556 "rw_ios_per_sec": 0, 00:16:09.556 "rw_mbytes_per_sec": 0, 00:16:09.556 "r_mbytes_per_sec": 0, 00:16:09.556 "w_mbytes_per_sec": 0 00:16:09.556 }, 00:16:09.556 "claimed": false, 00:16:09.556 "zoned": false, 00:16:09.556 "supported_io_types": { 00:16:09.556 "read": true, 00:16:09.556 "write": true, 00:16:09.556 "unmap": true, 00:16:09.556 "flush": true, 00:16:09.556 "reset": true, 00:16:09.556 "nvme_admin": false, 00:16:09.556 "nvme_io": false, 00:16:09.556 "nvme_io_md": false, 00:16:09.556 "write_zeroes": true, 00:16:09.556 "zcopy": true, 00:16:09.556 "get_zone_info": false, 00:16:09.556 "zone_management": false, 00:16:09.556 "zone_append": false, 00:16:09.556 "compare": false, 00:16:09.556 "compare_and_write": false, 00:16:09.556 "abort": true, 00:16:09.556 "seek_hole": false, 00:16:09.556 "seek_data": false, 00:16:09.556 "copy": true, 00:16:09.556 "nvme_iov_md": false 00:16:09.556 }, 00:16:09.556 "memory_domains": [ 00:16:09.556 { 00:16:09.556 
"dma_device_id": "system", 00:16:09.556 "dma_device_type": 1 00:16:09.556 }, 00:16:09.556 { 00:16:09.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.556 "dma_device_type": 2 00:16:09.556 } 00:16:09.556 ], 00:16:09.556 "driver_specific": {} 00:16:09.556 } 00:16:09.556 ] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 BaseBdev4 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 [ 00:16:09.556 { 00:16:09.556 "name": "BaseBdev4", 00:16:09.556 "aliases": [ 00:16:09.556 "dc552ea1-1f4f-4171-a5be-15ccee228c15" 00:16:09.556 ], 00:16:09.556 "product_name": "Malloc disk", 00:16:09.556 "block_size": 512, 00:16:09.556 "num_blocks": 65536, 00:16:09.556 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:09.556 "assigned_rate_limits": { 00:16:09.556 "rw_ios_per_sec": 0, 00:16:09.556 "rw_mbytes_per_sec": 0, 00:16:09.556 "r_mbytes_per_sec": 0, 00:16:09.556 "w_mbytes_per_sec": 0 00:16:09.556 }, 00:16:09.556 "claimed": false, 00:16:09.556 "zoned": false, 00:16:09.556 "supported_io_types": { 00:16:09.556 "read": true, 00:16:09.556 "write": true, 00:16:09.556 "unmap": true, 00:16:09.556 "flush": true, 00:16:09.556 "reset": true, 00:16:09.556 "nvme_admin": false, 00:16:09.556 "nvme_io": false, 00:16:09.556 "nvme_io_md": false, 00:16:09.556 "write_zeroes": true, 00:16:09.556 "zcopy": true, 00:16:09.556 "get_zone_info": false, 00:16:09.556 "zone_management": false, 00:16:09.556 "zone_append": false, 00:16:09.556 "compare": false, 00:16:09.556 "compare_and_write": false, 00:16:09.556 "abort": true, 00:16:09.556 "seek_hole": false, 00:16:09.556 "seek_data": false, 00:16:09.556 "copy": true, 00:16:09.556 "nvme_iov_md": false 00:16:09.556 }, 00:16:09.556 "memory_domains": [ 
00:16:09.556 { 00:16:09.556 "dma_device_id": "system", 00:16:09.556 "dma_device_type": 1 00:16:09.556 }, 00:16:09.556 { 00:16:09.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.556 "dma_device_type": 2 00:16:09.556 } 00:16:09.556 ], 00:16:09.556 "driver_specific": {} 00:16:09.556 } 00:16:09.556 ] 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.556 [2024-11-18 04:05:06.072393] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.556 [2024-11-18 04:05:06.072493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.556 [2024-11-18 04:05:06.072533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.556 [2024-11-18 04:05:06.074208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.556 [2024-11-18 04:05:06.074295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.556 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.557 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.557 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.557 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.557 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.557 "name": "Existed_Raid", 00:16:09.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.557 "strip_size_kb": 64, 00:16:09.557 "state": "configuring", 00:16:09.557 "raid_level": "raid5f", 00:16:09.557 
"superblock": false, 00:16:09.557 "num_base_bdevs": 4, 00:16:09.557 "num_base_bdevs_discovered": 3, 00:16:09.557 "num_base_bdevs_operational": 4, 00:16:09.557 "base_bdevs_list": [ 00:16:09.557 { 00:16:09.557 "name": "BaseBdev1", 00:16:09.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.557 "is_configured": false, 00:16:09.557 "data_offset": 0, 00:16:09.557 "data_size": 0 00:16:09.557 }, 00:16:09.557 { 00:16:09.557 "name": "BaseBdev2", 00:16:09.557 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:09.557 "is_configured": true, 00:16:09.557 "data_offset": 0, 00:16:09.557 "data_size": 65536 00:16:09.557 }, 00:16:09.557 { 00:16:09.557 "name": "BaseBdev3", 00:16:09.557 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:09.557 "is_configured": true, 00:16:09.557 "data_offset": 0, 00:16:09.557 "data_size": 65536 00:16:09.557 }, 00:16:09.557 { 00:16:09.557 "name": "BaseBdev4", 00:16:09.557 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:09.557 "is_configured": true, 00:16:09.557 "data_offset": 0, 00:16:09.557 "data_size": 65536 00:16:09.557 } 00:16:09.557 ] 00:16:09.557 }' 00:16:09.557 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.557 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.127 [2024-11-18 04:05:06.491741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.127 "name": "Existed_Raid", 00:16:10.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.127 "strip_size_kb": 64, 00:16:10.127 "state": "configuring", 00:16:10.127 "raid_level": "raid5f", 00:16:10.127 "superblock": false, 
00:16:10.127 "num_base_bdevs": 4, 00:16:10.127 "num_base_bdevs_discovered": 2, 00:16:10.127 "num_base_bdevs_operational": 4, 00:16:10.127 "base_bdevs_list": [ 00:16:10.127 { 00:16:10.127 "name": "BaseBdev1", 00:16:10.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.127 "is_configured": false, 00:16:10.127 "data_offset": 0, 00:16:10.127 "data_size": 0 00:16:10.127 }, 00:16:10.127 { 00:16:10.127 "name": null, 00:16:10.127 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:10.127 "is_configured": false, 00:16:10.127 "data_offset": 0, 00:16:10.127 "data_size": 65536 00:16:10.127 }, 00:16:10.127 { 00:16:10.127 "name": "BaseBdev3", 00:16:10.127 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:10.127 "is_configured": true, 00:16:10.127 "data_offset": 0, 00:16:10.127 "data_size": 65536 00:16:10.127 }, 00:16:10.127 { 00:16:10.127 "name": "BaseBdev4", 00:16:10.127 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:10.127 "is_configured": true, 00:16:10.127 "data_offset": 0, 00:16:10.127 "data_size": 65536 00:16:10.127 } 00:16:10.127 ] 00:16:10.127 }' 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.127 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:10.388 
04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.388 [2024-11-18 04:05:06.982553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.388 BaseBdev1 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:10.388 04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.388 
04:05:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.388 [ 00:16:10.388 { 00:16:10.388 "name": "BaseBdev1", 00:16:10.388 "aliases": [ 00:16:10.388 "052d30fc-fdd0-4704-b774-63d6ac968c33" 00:16:10.388 ], 00:16:10.388 "product_name": "Malloc disk", 00:16:10.388 "block_size": 512, 00:16:10.388 "num_blocks": 65536, 00:16:10.388 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:10.388 "assigned_rate_limits": { 00:16:10.388 "rw_ios_per_sec": 0, 00:16:10.388 "rw_mbytes_per_sec": 0, 00:16:10.388 "r_mbytes_per_sec": 0, 00:16:10.388 "w_mbytes_per_sec": 0 00:16:10.388 }, 00:16:10.388 "claimed": true, 00:16:10.388 "claim_type": "exclusive_write", 00:16:10.388 "zoned": false, 00:16:10.388 "supported_io_types": { 00:16:10.388 "read": true, 00:16:10.388 "write": true, 00:16:10.388 "unmap": true, 00:16:10.388 "flush": true, 00:16:10.388 "reset": true, 00:16:10.388 "nvme_admin": false, 00:16:10.388 "nvme_io": false, 00:16:10.388 "nvme_io_md": false, 00:16:10.388 "write_zeroes": true, 00:16:10.388 "zcopy": true, 00:16:10.388 "get_zone_info": false, 00:16:10.388 "zone_management": false, 00:16:10.388 "zone_append": false, 00:16:10.388 "compare": false, 00:16:10.388 "compare_and_write": false, 00:16:10.388 "abort": true, 00:16:10.388 "seek_hole": false, 00:16:10.388 "seek_data": false, 00:16:10.388 "copy": true, 00:16:10.388 "nvme_iov_md": false 00:16:10.388 }, 00:16:10.388 "memory_domains": [ 00:16:10.388 { 00:16:10.388 "dma_device_id": "system", 00:16:10.388 "dma_device_type": 1 00:16:10.388 }, 00:16:10.388 { 00:16:10.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.388 "dma_device_type": 2 00:16:10.388 } 00:16:10.388 ], 00:16:10.388 "driver_specific": {} 00:16:10.388 } 00:16:10.388 ] 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:10.388 04:05:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.388 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.649 "name": "Existed_Raid", 00:16:10.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.649 "strip_size_kb": 64, 00:16:10.649 "state": 
"configuring", 00:16:10.649 "raid_level": "raid5f", 00:16:10.649 "superblock": false, 00:16:10.649 "num_base_bdevs": 4, 00:16:10.649 "num_base_bdevs_discovered": 3, 00:16:10.649 "num_base_bdevs_operational": 4, 00:16:10.649 "base_bdevs_list": [ 00:16:10.649 { 00:16:10.649 "name": "BaseBdev1", 00:16:10.649 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:10.649 "is_configured": true, 00:16:10.649 "data_offset": 0, 00:16:10.649 "data_size": 65536 00:16:10.649 }, 00:16:10.649 { 00:16:10.649 "name": null, 00:16:10.649 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:10.649 "is_configured": false, 00:16:10.649 "data_offset": 0, 00:16:10.649 "data_size": 65536 00:16:10.649 }, 00:16:10.649 { 00:16:10.649 "name": "BaseBdev3", 00:16:10.649 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:10.649 "is_configured": true, 00:16:10.649 "data_offset": 0, 00:16:10.649 "data_size": 65536 00:16:10.649 }, 00:16:10.649 { 00:16:10.649 "name": "BaseBdev4", 00:16:10.649 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:10.649 "is_configured": true, 00:16:10.649 "data_offset": 0, 00:16:10.649 "data_size": 65536 00:16:10.649 } 00:16:10.649 ] 00:16:10.649 }' 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.649 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.909 04:05:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.909 [2024-11-18 04:05:07.485747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.909 04:05:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.909 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.909 "name": "Existed_Raid", 00:16:10.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.909 "strip_size_kb": 64, 00:16:10.909 "state": "configuring", 00:16:10.909 "raid_level": "raid5f", 00:16:10.909 "superblock": false, 00:16:10.909 "num_base_bdevs": 4, 00:16:10.909 "num_base_bdevs_discovered": 2, 00:16:10.909 "num_base_bdevs_operational": 4, 00:16:10.909 "base_bdevs_list": [ 00:16:10.910 { 00:16:10.910 "name": "BaseBdev1", 00:16:10.910 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:10.910 "is_configured": true, 00:16:10.910 "data_offset": 0, 00:16:10.910 "data_size": 65536 00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "name": null, 00:16:10.910 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:10.910 "is_configured": false, 00:16:10.910 "data_offset": 0, 00:16:10.910 "data_size": 65536 00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "name": null, 00:16:10.910 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:10.910 "is_configured": false, 00:16:10.910 "data_offset": 0, 00:16:10.910 "data_size": 65536 00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "name": "BaseBdev4", 00:16:10.910 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:10.910 "is_configured": true, 00:16:10.910 "data_offset": 0, 00:16:10.910 "data_size": 65536 00:16:10.910 } 00:16:10.910 ] 00:16:10.910 }' 00:16:10.910 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.910 04:05:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.481 [2024-11-18 04:05:07.933000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.481 
04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.481 "name": "Existed_Raid", 00:16:11.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.481 "strip_size_kb": 64, 00:16:11.481 "state": "configuring", 00:16:11.481 "raid_level": "raid5f", 00:16:11.481 "superblock": false, 00:16:11.481 "num_base_bdevs": 4, 00:16:11.481 "num_base_bdevs_discovered": 3, 00:16:11.481 "num_base_bdevs_operational": 4, 00:16:11.481 "base_bdevs_list": [ 00:16:11.481 { 00:16:11.481 "name": "BaseBdev1", 00:16:11.481 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:11.481 "is_configured": true, 00:16:11.481 "data_offset": 0, 00:16:11.481 "data_size": 65536 00:16:11.481 }, 00:16:11.481 { 00:16:11.481 "name": null, 00:16:11.481 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:11.481 "is_configured": 
false, 00:16:11.481 "data_offset": 0, 00:16:11.481 "data_size": 65536 00:16:11.481 }, 00:16:11.481 { 00:16:11.481 "name": "BaseBdev3", 00:16:11.481 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:11.481 "is_configured": true, 00:16:11.481 "data_offset": 0, 00:16:11.481 "data_size": 65536 00:16:11.481 }, 00:16:11.481 { 00:16:11.481 "name": "BaseBdev4", 00:16:11.481 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:11.481 "is_configured": true, 00:16:11.481 "data_offset": 0, 00:16:11.481 "data_size": 65536 00:16:11.481 } 00:16:11.481 ] 00:16:11.481 }' 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.481 04:05:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.741 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.741 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.741 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.741 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.741 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.000 [2024-11-18 04:05:08.396221] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.000 "name": "Existed_Raid", 00:16:12.000 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:12.000 "strip_size_kb": 64, 00:16:12.000 "state": "configuring", 00:16:12.000 "raid_level": "raid5f", 00:16:12.000 "superblock": false, 00:16:12.000 "num_base_bdevs": 4, 00:16:12.000 "num_base_bdevs_discovered": 2, 00:16:12.000 "num_base_bdevs_operational": 4, 00:16:12.000 "base_bdevs_list": [ 00:16:12.000 { 00:16:12.000 "name": null, 00:16:12.000 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:12.000 "is_configured": false, 00:16:12.000 "data_offset": 0, 00:16:12.000 "data_size": 65536 00:16:12.000 }, 00:16:12.000 { 00:16:12.000 "name": null, 00:16:12.000 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:12.000 "is_configured": false, 00:16:12.000 "data_offset": 0, 00:16:12.000 "data_size": 65536 00:16:12.000 }, 00:16:12.000 { 00:16:12.000 "name": "BaseBdev3", 00:16:12.000 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:12.000 "is_configured": true, 00:16:12.000 "data_offset": 0, 00:16:12.000 "data_size": 65536 00:16:12.000 }, 00:16:12.000 { 00:16:12.000 "name": "BaseBdev4", 00:16:12.000 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:12.000 "is_configured": true, 00:16:12.000 "data_offset": 0, 00:16:12.000 "data_size": 65536 00:16:12.000 } 00:16:12.000 ] 00:16:12.000 }' 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.000 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 [2024-11-18 04:05:08.991620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.568 04:05:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.568 "name": "Existed_Raid", 00:16:12.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.568 "strip_size_kb": 64, 00:16:12.568 "state": "configuring", 00:16:12.568 "raid_level": "raid5f", 00:16:12.568 "superblock": false, 00:16:12.568 "num_base_bdevs": 4, 00:16:12.568 "num_base_bdevs_discovered": 3, 00:16:12.568 "num_base_bdevs_operational": 4, 00:16:12.568 "base_bdevs_list": [ 00:16:12.568 { 00:16:12.568 "name": null, 00:16:12.568 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:12.568 "is_configured": false, 00:16:12.568 "data_offset": 0, 00:16:12.568 "data_size": 65536 00:16:12.568 }, 00:16:12.568 { 00:16:12.568 "name": "BaseBdev2", 00:16:12.568 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:12.568 "is_configured": true, 00:16:12.568 "data_offset": 0, 00:16:12.568 "data_size": 65536 00:16:12.568 }, 00:16:12.568 { 00:16:12.568 "name": "BaseBdev3", 00:16:12.568 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:12.568 "is_configured": true, 00:16:12.568 "data_offset": 0, 00:16:12.568 "data_size": 65536 00:16:12.568 }, 00:16:12.568 { 00:16:12.568 "name": "BaseBdev4", 00:16:12.568 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:12.568 "is_configured": true, 00:16:12.568 "data_offset": 0, 00:16:12.568 "data_size": 65536 00:16:12.568 } 00:16:12.568 ] 00:16:12.568 }' 00:16:12.568 04:05:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.568 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.827 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.827 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:12.827 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.827 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 052d30fc-fdd0-4704-b774-63d6ac968c33 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.086 [2024-11-18 04:05:09.590077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:13.086 [2024-11-18 
04:05:09.590209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:13.086 [2024-11-18 04:05:09.590234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:13.086 [2024-11-18 04:05:09.590525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:13.086 [2024-11-18 04:05:09.597330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:13.086 [2024-11-18 04:05:09.597385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:13.086 [2024-11-18 04:05:09.597679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.086 NewBaseBdev 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.086 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.087 [ 00:16:13.087 { 00:16:13.087 "name": "NewBaseBdev", 00:16:13.087 "aliases": [ 00:16:13.087 "052d30fc-fdd0-4704-b774-63d6ac968c33" 00:16:13.087 ], 00:16:13.087 "product_name": "Malloc disk", 00:16:13.087 "block_size": 512, 00:16:13.087 "num_blocks": 65536, 00:16:13.087 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:13.087 "assigned_rate_limits": { 00:16:13.087 "rw_ios_per_sec": 0, 00:16:13.087 "rw_mbytes_per_sec": 0, 00:16:13.087 "r_mbytes_per_sec": 0, 00:16:13.087 "w_mbytes_per_sec": 0 00:16:13.087 }, 00:16:13.087 "claimed": true, 00:16:13.087 "claim_type": "exclusive_write", 00:16:13.087 "zoned": false, 00:16:13.087 "supported_io_types": { 00:16:13.087 "read": true, 00:16:13.087 "write": true, 00:16:13.087 "unmap": true, 00:16:13.087 "flush": true, 00:16:13.087 "reset": true, 00:16:13.087 "nvme_admin": false, 00:16:13.087 "nvme_io": false, 00:16:13.087 "nvme_io_md": false, 00:16:13.087 "write_zeroes": true, 00:16:13.087 "zcopy": true, 00:16:13.087 "get_zone_info": false, 00:16:13.087 "zone_management": false, 00:16:13.087 "zone_append": false, 00:16:13.087 "compare": false, 00:16:13.087 "compare_and_write": false, 00:16:13.087 "abort": true, 00:16:13.087 "seek_hole": false, 00:16:13.087 "seek_data": false, 00:16:13.087 "copy": true, 00:16:13.087 "nvme_iov_md": false 00:16:13.087 }, 00:16:13.087 "memory_domains": [ 00:16:13.087 { 00:16:13.087 "dma_device_id": "system", 00:16:13.087 "dma_device_type": 1 00:16:13.087 }, 00:16:13.087 { 00:16:13.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.087 "dma_device_type": 2 00:16:13.087 } 
00:16:13.087 ], 00:16:13.087 "driver_specific": {} 00:16:13.087 } 00:16:13.087 ] 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.087 "name": "Existed_Raid", 00:16:13.087 "uuid": "dc113e7c-ced0-4b77-b90a-58c6cc3ebb78", 00:16:13.087 "strip_size_kb": 64, 00:16:13.087 "state": "online", 00:16:13.087 "raid_level": "raid5f", 00:16:13.087 "superblock": false, 00:16:13.087 "num_base_bdevs": 4, 00:16:13.087 "num_base_bdevs_discovered": 4, 00:16:13.087 "num_base_bdevs_operational": 4, 00:16:13.087 "base_bdevs_list": [ 00:16:13.087 { 00:16:13.087 "name": "NewBaseBdev", 00:16:13.087 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:13.087 "is_configured": true, 00:16:13.087 "data_offset": 0, 00:16:13.087 "data_size": 65536 00:16:13.087 }, 00:16:13.087 { 00:16:13.087 "name": "BaseBdev2", 00:16:13.087 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:13.087 "is_configured": true, 00:16:13.087 "data_offset": 0, 00:16:13.087 "data_size": 65536 00:16:13.087 }, 00:16:13.087 { 00:16:13.087 "name": "BaseBdev3", 00:16:13.087 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:13.087 "is_configured": true, 00:16:13.087 "data_offset": 0, 00:16:13.087 "data_size": 65536 00:16:13.087 }, 00:16:13.087 { 00:16:13.087 "name": "BaseBdev4", 00:16:13.087 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:13.087 "is_configured": true, 00:16:13.087 "data_offset": 0, 00:16:13.087 "data_size": 65536 00:16:13.087 } 00:16:13.087 ] 00:16:13.087 }' 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.087 04:05:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.655 [2024-11-18 04:05:10.097147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:13.655 "name": "Existed_Raid", 00:16:13.655 "aliases": [ 00:16:13.655 "dc113e7c-ced0-4b77-b90a-58c6cc3ebb78" 00:16:13.655 ], 00:16:13.655 "product_name": "Raid Volume", 00:16:13.655 "block_size": 512, 00:16:13.655 "num_blocks": 196608, 00:16:13.655 "uuid": "dc113e7c-ced0-4b77-b90a-58c6cc3ebb78", 00:16:13.655 "assigned_rate_limits": { 00:16:13.655 "rw_ios_per_sec": 0, 00:16:13.655 "rw_mbytes_per_sec": 0, 00:16:13.655 "r_mbytes_per_sec": 0, 00:16:13.655 "w_mbytes_per_sec": 0 00:16:13.655 }, 00:16:13.655 "claimed": false, 00:16:13.655 "zoned": false, 00:16:13.655 "supported_io_types": { 00:16:13.655 "read": true, 00:16:13.655 "write": true, 00:16:13.655 "unmap": false, 00:16:13.655 "flush": false, 00:16:13.655 "reset": true, 00:16:13.655 "nvme_admin": false, 00:16:13.655 "nvme_io": false, 00:16:13.655 "nvme_io_md": 
false, 00:16:13.655 "write_zeroes": true, 00:16:13.655 "zcopy": false, 00:16:13.655 "get_zone_info": false, 00:16:13.655 "zone_management": false, 00:16:13.655 "zone_append": false, 00:16:13.655 "compare": false, 00:16:13.655 "compare_and_write": false, 00:16:13.655 "abort": false, 00:16:13.655 "seek_hole": false, 00:16:13.655 "seek_data": false, 00:16:13.655 "copy": false, 00:16:13.655 "nvme_iov_md": false 00:16:13.655 }, 00:16:13.655 "driver_specific": { 00:16:13.655 "raid": { 00:16:13.655 "uuid": "dc113e7c-ced0-4b77-b90a-58c6cc3ebb78", 00:16:13.655 "strip_size_kb": 64, 00:16:13.655 "state": "online", 00:16:13.655 "raid_level": "raid5f", 00:16:13.655 "superblock": false, 00:16:13.655 "num_base_bdevs": 4, 00:16:13.655 "num_base_bdevs_discovered": 4, 00:16:13.655 "num_base_bdevs_operational": 4, 00:16:13.655 "base_bdevs_list": [ 00:16:13.655 { 00:16:13.655 "name": "NewBaseBdev", 00:16:13.655 "uuid": "052d30fc-fdd0-4704-b774-63d6ac968c33", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 }, 00:16:13.655 { 00:16:13.655 "name": "BaseBdev2", 00:16:13.655 "uuid": "f7c7d10c-095c-4113-99eb-d086d07e9bd9", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 }, 00:16:13.655 { 00:16:13.655 "name": "BaseBdev3", 00:16:13.655 "uuid": "50554960-2e82-4111-9415-aeae4e64f091", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 }, 00:16:13.655 { 00:16:13.655 "name": "BaseBdev4", 00:16:13.655 "uuid": "dc552ea1-1f4f-4171-a5be-15ccee228c15", 00:16:13.655 "is_configured": true, 00:16:13.655 "data_offset": 0, 00:16:13.655 "data_size": 65536 00:16:13.655 } 00:16:13.655 ] 00:16:13.655 } 00:16:13.655 } 00:16:13.655 }' 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:13.655 04:05:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:13.655 BaseBdev2 00:16:13.655 BaseBdev3 00:16:13.655 BaseBdev4' 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.655 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.656 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:13.656 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.656 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.656 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.916 [2024-11-18 04:05:10.416412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.916 [2024-11-18 04:05:10.416475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.916 [2024-11-18 04:05:10.416568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.916 [2024-11-18 04:05:10.416859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.916 [2024-11-18 04:05:10.416871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82660 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82660 ']' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82660 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.916 04:05:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82660 00:16:13.916 killing process with pid 82660 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82660' 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82660 00:16:13.916 [2024-11-18 04:05:10.467227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.916 04:05:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82660 00:16:14.486 [2024-11-18 04:05:10.829415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.426 04:05:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:15.426 00:16:15.426 real 0m11.287s 00:16:15.426 user 0m18.021s 00:16:15.426 sys 0m2.052s 00:16:15.426 04:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.426 04:05:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.426 ************************************ 00:16:15.426 END TEST raid5f_state_function_test 00:16:15.426 ************************************ 00:16:15.426 04:05:11 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:15.426 04:05:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:15.426 04:05:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.426 04:05:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.426 ************************************ 00:16:15.426 START TEST 
raid5f_state_function_test_sb 00:16:15.426 ************************************ 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:15.427 
04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83328 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83328' 00:16:15.427 Process raid pid: 83328 00:16:15.427 04:05:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83328 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83328 ']' 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.427 04:05:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.427 [2024-11-18 04:05:12.049787] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:15.427 [2024-11-18 04:05:12.049996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.687 [2024-11-18 04:05:12.229873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.947 [2024-11-18 04:05:12.336283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.947 [2024-11-18 04:05:12.528540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.947 [2024-11-18 04:05:12.528630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.517 [2024-11-18 04:05:12.854284] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.517 [2024-11-18 04:05:12.854386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.517 [2024-11-18 04:05:12.854419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.517 [2024-11-18 04:05:12.854442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.517 [2024-11-18 04:05:12.854459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:16.517 [2024-11-18 04:05:12.854478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.517 [2024-11-18 04:05:12.854510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.517 [2024-11-18 04:05:12.854530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.517 "name": "Existed_Raid", 00:16:16.517 "uuid": "a60abd7a-2a5f-4d8c-b43a-b69aa048bc00", 00:16:16.517 "strip_size_kb": 64, 00:16:16.517 "state": "configuring", 00:16:16.517 "raid_level": "raid5f", 00:16:16.517 "superblock": true, 00:16:16.517 "num_base_bdevs": 4, 00:16:16.517 "num_base_bdevs_discovered": 0, 00:16:16.517 "num_base_bdevs_operational": 4, 00:16:16.517 "base_bdevs_list": [ 00:16:16.517 { 00:16:16.517 "name": "BaseBdev1", 00:16:16.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.517 "is_configured": false, 00:16:16.517 "data_offset": 0, 00:16:16.517 "data_size": 0 00:16:16.517 }, 00:16:16.517 { 00:16:16.517 "name": "BaseBdev2", 00:16:16.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.517 "is_configured": false, 00:16:16.517 "data_offset": 0, 00:16:16.517 "data_size": 0 00:16:16.517 }, 00:16:16.517 { 00:16:16.517 "name": "BaseBdev3", 00:16:16.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.517 "is_configured": false, 00:16:16.517 "data_offset": 0, 00:16:16.517 "data_size": 0 00:16:16.517 }, 00:16:16.517 { 00:16:16.517 "name": "BaseBdev4", 00:16:16.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.517 "is_configured": false, 00:16:16.517 "data_offset": 0, 00:16:16.517 "data_size": 0 00:16:16.517 } 00:16:16.517 ] 00:16:16.517 }' 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.517 04:05:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.778 [2024-11-18 04:05:13.337466] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.778 [2024-11-18 04:05:13.337537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.778 [2024-11-18 04:05:13.349453] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.778 [2024-11-18 04:05:13.349541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.778 [2024-11-18 04:05:13.349566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:16.778 [2024-11-18 04:05:13.349587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:16.778 [2024-11-18 04:05:13.349603] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:16.778 [2024-11-18 04:05:13.349623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:16.778 [2024-11-18 04:05:13.349639] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:16.778 [2024-11-18 04:05:13.349675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.778 [2024-11-18 04:05:13.397539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.778 BaseBdev1 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.778 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.039 [ 00:16:17.039 { 00:16:17.039 "name": "BaseBdev1", 00:16:17.039 "aliases": [ 00:16:17.039 "53281f34-e354-4d9a-97a6-940ead163db5" 00:16:17.039 ], 00:16:17.039 "product_name": "Malloc disk", 00:16:17.039 "block_size": 512, 00:16:17.039 "num_blocks": 65536, 00:16:17.039 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:17.039 "assigned_rate_limits": { 00:16:17.039 "rw_ios_per_sec": 0, 00:16:17.039 "rw_mbytes_per_sec": 0, 00:16:17.039 "r_mbytes_per_sec": 0, 00:16:17.039 "w_mbytes_per_sec": 0 00:16:17.039 }, 00:16:17.039 "claimed": true, 00:16:17.039 "claim_type": "exclusive_write", 00:16:17.039 "zoned": false, 00:16:17.039 "supported_io_types": { 00:16:17.039 "read": true, 00:16:17.039 "write": true, 00:16:17.039 "unmap": true, 00:16:17.039 "flush": true, 00:16:17.039 "reset": true, 00:16:17.039 "nvme_admin": false, 00:16:17.039 "nvme_io": false, 00:16:17.039 "nvme_io_md": false, 00:16:17.039 "write_zeroes": true, 00:16:17.039 "zcopy": true, 00:16:17.039 "get_zone_info": false, 00:16:17.039 "zone_management": false, 00:16:17.039 "zone_append": false, 00:16:17.039 "compare": false, 00:16:17.039 "compare_and_write": false, 00:16:17.039 "abort": true, 00:16:17.039 "seek_hole": false, 00:16:17.039 "seek_data": false, 00:16:17.039 "copy": true, 00:16:17.039 "nvme_iov_md": false 00:16:17.039 }, 00:16:17.039 "memory_domains": [ 00:16:17.039 { 00:16:17.039 "dma_device_id": "system", 00:16:17.039 "dma_device_type": 1 00:16:17.039 }, 00:16:17.039 { 00:16:17.039 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:17.039 "dma_device_type": 2 00:16:17.039 } 00:16:17.039 ], 00:16:17.039 "driver_specific": {} 00:16:17.039 } 00:16:17.039 ] 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.039 04:05:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.039 "name": "Existed_Raid", 00:16:17.039 "uuid": "60c03a86-4c81-4b80-a40d-28050c9c5fb8", 00:16:17.039 "strip_size_kb": 64, 00:16:17.039 "state": "configuring", 00:16:17.039 "raid_level": "raid5f", 00:16:17.039 "superblock": true, 00:16:17.039 "num_base_bdevs": 4, 00:16:17.039 "num_base_bdevs_discovered": 1, 00:16:17.039 "num_base_bdevs_operational": 4, 00:16:17.039 "base_bdevs_list": [ 00:16:17.039 { 00:16:17.039 "name": "BaseBdev1", 00:16:17.039 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:17.039 "is_configured": true, 00:16:17.039 "data_offset": 2048, 00:16:17.039 "data_size": 63488 00:16:17.039 }, 00:16:17.039 { 00:16:17.039 "name": "BaseBdev2", 00:16:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.039 "is_configured": false, 00:16:17.039 "data_offset": 0, 00:16:17.039 "data_size": 0 00:16:17.039 }, 00:16:17.039 { 00:16:17.039 "name": "BaseBdev3", 00:16:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.039 "is_configured": false, 00:16:17.039 "data_offset": 0, 00:16:17.039 "data_size": 0 00:16:17.039 }, 00:16:17.039 { 00:16:17.039 "name": "BaseBdev4", 00:16:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.039 "is_configured": false, 00:16:17.039 "data_offset": 0, 00:16:17.039 "data_size": 0 00:16:17.039 } 00:16:17.039 ] 00:16:17.039 }' 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.039 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.298 04:05:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.298 [2024-11-18 04:05:13.904701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.298 [2024-11-18 04:05:13.904793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.298 [2024-11-18 04:05:13.916737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.298 [2024-11-18 04:05:13.918474] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:17.298 [2024-11-18 04:05:13.918510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:17.298 [2024-11-18 04:05:13.918520] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:17.298 [2024-11-18 04:05:13.918529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:17.298 [2024-11-18 04:05:13.918535] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:17.298 [2024-11-18 04:05:13.918543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.298 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.299 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.559 04:05:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.559 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.559 "name": "Existed_Raid", 00:16:17.559 "uuid": "38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:17.559 "strip_size_kb": 64, 00:16:17.559 "state": "configuring", 00:16:17.559 "raid_level": "raid5f", 00:16:17.559 "superblock": true, 00:16:17.559 "num_base_bdevs": 4, 00:16:17.559 "num_base_bdevs_discovered": 1, 00:16:17.559 "num_base_bdevs_operational": 4, 00:16:17.559 "base_bdevs_list": [ 00:16:17.559 { 00:16:17.559 "name": "BaseBdev1", 00:16:17.559 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:17.559 "is_configured": true, 00:16:17.559 "data_offset": 2048, 00:16:17.559 "data_size": 63488 00:16:17.559 }, 00:16:17.559 { 00:16:17.559 "name": "BaseBdev2", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "is_configured": false, 00:16:17.559 "data_offset": 0, 00:16:17.559 "data_size": 0 00:16:17.559 }, 00:16:17.559 { 00:16:17.559 "name": "BaseBdev3", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "is_configured": false, 00:16:17.559 "data_offset": 0, 00:16:17.559 "data_size": 0 00:16:17.559 }, 00:16:17.559 { 00:16:17.559 "name": "BaseBdev4", 00:16:17.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.559 "is_configured": false, 00:16:17.559 "data_offset": 0, 00:16:17.559 "data_size": 0 00:16:17.559 } 00:16:17.559 ] 00:16:17.559 }' 00:16:17.559 04:05:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.559 04:05:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.819 [2024-11-18 04:05:14.404136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.819 BaseBdev2 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.819 [ 00:16:17.819 { 00:16:17.819 "name": "BaseBdev2", 00:16:17.819 "aliases": [ 00:16:17.819 
"5e89d0d3-a495-4970-b557-a1cecffe50b4" 00:16:17.819 ], 00:16:17.819 "product_name": "Malloc disk", 00:16:17.819 "block_size": 512, 00:16:17.819 "num_blocks": 65536, 00:16:17.819 "uuid": "5e89d0d3-a495-4970-b557-a1cecffe50b4", 00:16:17.819 "assigned_rate_limits": { 00:16:17.819 "rw_ios_per_sec": 0, 00:16:17.819 "rw_mbytes_per_sec": 0, 00:16:17.819 "r_mbytes_per_sec": 0, 00:16:17.819 "w_mbytes_per_sec": 0 00:16:17.819 }, 00:16:17.819 "claimed": true, 00:16:17.819 "claim_type": "exclusive_write", 00:16:17.819 "zoned": false, 00:16:17.819 "supported_io_types": { 00:16:17.819 "read": true, 00:16:17.819 "write": true, 00:16:17.819 "unmap": true, 00:16:17.819 "flush": true, 00:16:17.819 "reset": true, 00:16:17.819 "nvme_admin": false, 00:16:17.819 "nvme_io": false, 00:16:17.819 "nvme_io_md": false, 00:16:17.819 "write_zeroes": true, 00:16:17.819 "zcopy": true, 00:16:17.819 "get_zone_info": false, 00:16:17.819 "zone_management": false, 00:16:17.819 "zone_append": false, 00:16:17.819 "compare": false, 00:16:17.819 "compare_and_write": false, 00:16:17.819 "abort": true, 00:16:17.819 "seek_hole": false, 00:16:17.819 "seek_data": false, 00:16:17.819 "copy": true, 00:16:17.819 "nvme_iov_md": false 00:16:17.819 }, 00:16:17.819 "memory_domains": [ 00:16:17.819 { 00:16:17.819 "dma_device_id": "system", 00:16:17.819 "dma_device_type": 1 00:16:17.819 }, 00:16:17.819 { 00:16:17.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.819 "dma_device_type": 2 00:16:17.819 } 00:16:17.819 ], 00:16:17.819 "driver_specific": {} 00:16:17.819 } 00:16:17.819 ] 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:17.819 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.820 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.079 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.079 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.079 "name": "Existed_Raid", 00:16:18.079 "uuid": 
"38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:18.079 "strip_size_kb": 64, 00:16:18.079 "state": "configuring", 00:16:18.079 "raid_level": "raid5f", 00:16:18.079 "superblock": true, 00:16:18.079 "num_base_bdevs": 4, 00:16:18.079 "num_base_bdevs_discovered": 2, 00:16:18.079 "num_base_bdevs_operational": 4, 00:16:18.079 "base_bdevs_list": [ 00:16:18.079 { 00:16:18.079 "name": "BaseBdev1", 00:16:18.079 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:18.079 "is_configured": true, 00:16:18.079 "data_offset": 2048, 00:16:18.079 "data_size": 63488 00:16:18.079 }, 00:16:18.079 { 00:16:18.079 "name": "BaseBdev2", 00:16:18.079 "uuid": "5e89d0d3-a495-4970-b557-a1cecffe50b4", 00:16:18.079 "is_configured": true, 00:16:18.079 "data_offset": 2048, 00:16:18.079 "data_size": 63488 00:16:18.079 }, 00:16:18.079 { 00:16:18.079 "name": "BaseBdev3", 00:16:18.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.079 "is_configured": false, 00:16:18.079 "data_offset": 0, 00:16:18.079 "data_size": 0 00:16:18.079 }, 00:16:18.079 { 00:16:18.079 "name": "BaseBdev4", 00:16:18.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.079 "is_configured": false, 00:16:18.079 "data_offset": 0, 00:16:18.079 "data_size": 0 00:16:18.079 } 00:16:18.079 ] 00:16:18.079 }' 00:16:18.079 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.079 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.339 [2024-11-18 04:05:14.948526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.339 BaseBdev3 
00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.339 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.339 [ 00:16:18.339 { 00:16:18.339 "name": "BaseBdev3", 00:16:18.339 "aliases": [ 00:16:18.339 "30661f76-9cb8-4a38-a794-826975643a1b" 00:16:18.339 ], 00:16:18.339 "product_name": "Malloc disk", 00:16:18.339 "block_size": 512, 00:16:18.339 "num_blocks": 65536, 00:16:18.339 "uuid": "30661f76-9cb8-4a38-a794-826975643a1b", 00:16:18.339 
"assigned_rate_limits": { 00:16:18.339 "rw_ios_per_sec": 0, 00:16:18.339 "rw_mbytes_per_sec": 0, 00:16:18.339 "r_mbytes_per_sec": 0, 00:16:18.339 "w_mbytes_per_sec": 0 00:16:18.339 }, 00:16:18.339 "claimed": true, 00:16:18.339 "claim_type": "exclusive_write", 00:16:18.339 "zoned": false, 00:16:18.339 "supported_io_types": { 00:16:18.339 "read": true, 00:16:18.339 "write": true, 00:16:18.339 "unmap": true, 00:16:18.339 "flush": true, 00:16:18.598 "reset": true, 00:16:18.598 "nvme_admin": false, 00:16:18.598 "nvme_io": false, 00:16:18.598 "nvme_io_md": false, 00:16:18.598 "write_zeroes": true, 00:16:18.598 "zcopy": true, 00:16:18.598 "get_zone_info": false, 00:16:18.598 "zone_management": false, 00:16:18.598 "zone_append": false, 00:16:18.598 "compare": false, 00:16:18.598 "compare_and_write": false, 00:16:18.598 "abort": true, 00:16:18.598 "seek_hole": false, 00:16:18.598 "seek_data": false, 00:16:18.598 "copy": true, 00:16:18.598 "nvme_iov_md": false 00:16:18.598 }, 00:16:18.598 "memory_domains": [ 00:16:18.598 { 00:16:18.598 "dma_device_id": "system", 00:16:18.598 "dma_device_type": 1 00:16:18.598 }, 00:16:18.598 { 00:16:18.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.598 "dma_device_type": 2 00:16:18.598 } 00:16:18.598 ], 00:16:18.598 "driver_specific": {} 00:16:18.598 } 00:16:18.598 ] 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.598 04:05:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.598 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.598 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.598 "name": "Existed_Raid", 00:16:18.598 "uuid": "38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:18.598 "strip_size_kb": 64, 00:16:18.598 "state": "configuring", 00:16:18.598 "raid_level": "raid5f", 00:16:18.598 "superblock": true, 00:16:18.598 "num_base_bdevs": 4, 00:16:18.598 "num_base_bdevs_discovered": 3, 
00:16:18.598 "num_base_bdevs_operational": 4, 00:16:18.598 "base_bdevs_list": [ 00:16:18.598 { 00:16:18.598 "name": "BaseBdev1", 00:16:18.598 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:18.598 "is_configured": true, 00:16:18.599 "data_offset": 2048, 00:16:18.599 "data_size": 63488 00:16:18.599 }, 00:16:18.599 { 00:16:18.599 "name": "BaseBdev2", 00:16:18.599 "uuid": "5e89d0d3-a495-4970-b557-a1cecffe50b4", 00:16:18.599 "is_configured": true, 00:16:18.599 "data_offset": 2048, 00:16:18.599 "data_size": 63488 00:16:18.599 }, 00:16:18.599 { 00:16:18.599 "name": "BaseBdev3", 00:16:18.599 "uuid": "30661f76-9cb8-4a38-a794-826975643a1b", 00:16:18.599 "is_configured": true, 00:16:18.599 "data_offset": 2048, 00:16:18.599 "data_size": 63488 00:16:18.599 }, 00:16:18.599 { 00:16:18.599 "name": "BaseBdev4", 00:16:18.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.599 "is_configured": false, 00:16:18.599 "data_offset": 0, 00:16:18.599 "data_size": 0 00:16:18.599 } 00:16:18.599 ] 00:16:18.599 }' 00:16:18.599 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.599 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.858 [2024-11-18 04:05:15.453980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.858 [2024-11-18 04:05:15.454318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:18.858 [2024-11-18 04:05:15.454373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:18.858 [2024-11-18 
04:05:15.454644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:18.858 BaseBdev4 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.858 [2024-11-18 04:05:15.462023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:18.858 [2024-11-18 04:05:15.462084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:18.858 [2024-11-18 04:05:15.462353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:18.858 04:05:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.858 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.858 [ 00:16:18.858 { 00:16:18.858 "name": "BaseBdev4", 00:16:18.858 "aliases": [ 00:16:18.858 "fbbcb607-a4c5-43f4-a5c6-fcb6d8e5be4b" 00:16:18.858 ], 00:16:18.858 "product_name": "Malloc disk", 00:16:18.858 "block_size": 512, 00:16:18.858 "num_blocks": 65536, 00:16:18.858 "uuid": "fbbcb607-a4c5-43f4-a5c6-fcb6d8e5be4b", 00:16:18.858 "assigned_rate_limits": { 00:16:18.858 "rw_ios_per_sec": 0, 00:16:18.858 "rw_mbytes_per_sec": 0, 00:16:18.858 "r_mbytes_per_sec": 0, 00:16:18.858 "w_mbytes_per_sec": 0 00:16:18.858 }, 00:16:18.858 "claimed": true, 00:16:18.858 "claim_type": "exclusive_write", 00:16:18.858 "zoned": false, 00:16:18.858 "supported_io_types": { 00:16:18.859 "read": true, 00:16:18.859 "write": true, 00:16:18.859 "unmap": true, 00:16:18.859 "flush": true, 00:16:18.859 "reset": true, 00:16:18.859 "nvme_admin": false, 00:16:18.859 "nvme_io": false, 00:16:18.859 "nvme_io_md": false, 00:16:18.859 "write_zeroes": true, 00:16:18.859 "zcopy": true, 00:16:18.859 "get_zone_info": false, 00:16:18.859 "zone_management": false, 00:16:18.859 "zone_append": false, 00:16:18.859 "compare": false, 00:16:18.859 "compare_and_write": false, 00:16:18.859 "abort": true, 00:16:18.859 "seek_hole": false, 00:16:18.859 "seek_data": false, 00:16:18.859 "copy": true, 00:16:18.859 "nvme_iov_md": false 00:16:18.859 }, 00:16:18.859 "memory_domains": [ 00:16:18.859 { 00:16:18.859 "dma_device_id": "system", 00:16:18.859 "dma_device_type": 1 00:16:18.859 }, 00:16:18.859 { 00:16:18.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.859 "dma_device_type": 2 00:16:18.859 } 00:16:18.859 ], 00:16:18.859 "driver_specific": {} 00:16:18.859 } 00:16:18.859 ] 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.859 04:05:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.859 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.117 "name": "Existed_Raid", 00:16:19.117 "uuid": "38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:19.117 "strip_size_kb": 64, 00:16:19.117 "state": "online", 00:16:19.117 "raid_level": "raid5f", 00:16:19.117 "superblock": true, 00:16:19.117 "num_base_bdevs": 4, 00:16:19.117 "num_base_bdevs_discovered": 4, 00:16:19.117 "num_base_bdevs_operational": 4, 00:16:19.117 "base_bdevs_list": [ 00:16:19.117 { 00:16:19.117 "name": "BaseBdev1", 00:16:19.117 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:19.117 "is_configured": true, 00:16:19.117 "data_offset": 2048, 00:16:19.117 "data_size": 63488 00:16:19.117 }, 00:16:19.117 { 00:16:19.117 "name": "BaseBdev2", 00:16:19.117 "uuid": "5e89d0d3-a495-4970-b557-a1cecffe50b4", 00:16:19.117 "is_configured": true, 00:16:19.117 "data_offset": 2048, 00:16:19.117 "data_size": 63488 00:16:19.117 }, 00:16:19.117 { 00:16:19.117 "name": "BaseBdev3", 00:16:19.117 "uuid": "30661f76-9cb8-4a38-a794-826975643a1b", 00:16:19.117 "is_configured": true, 00:16:19.117 "data_offset": 2048, 00:16:19.117 "data_size": 63488 00:16:19.117 }, 00:16:19.117 { 00:16:19.117 "name": "BaseBdev4", 00:16:19.117 "uuid": "fbbcb607-a4c5-43f4-a5c6-fcb6d8e5be4b", 00:16:19.117 "is_configured": true, 00:16:19.117 "data_offset": 2048, 00:16:19.117 "data_size": 63488 00:16:19.117 } 00:16:19.117 ] 00:16:19.117 }' 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.117 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.375 [2024-11-18 04:05:15.941890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.375 04:05:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.375 "name": "Existed_Raid", 00:16:19.375 "aliases": [ 00:16:19.375 "38999ee0-d3cd-4579-9639-bbb853f9d095" 00:16:19.375 ], 00:16:19.375 "product_name": "Raid Volume", 00:16:19.375 "block_size": 512, 00:16:19.375 "num_blocks": 190464, 00:16:19.375 "uuid": "38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:19.375 "assigned_rate_limits": { 00:16:19.375 "rw_ios_per_sec": 0, 00:16:19.375 "rw_mbytes_per_sec": 0, 00:16:19.375 "r_mbytes_per_sec": 0, 00:16:19.375 "w_mbytes_per_sec": 0 00:16:19.375 }, 00:16:19.375 "claimed": false, 00:16:19.375 "zoned": false, 00:16:19.375 "supported_io_types": { 00:16:19.375 "read": true, 00:16:19.375 "write": true, 00:16:19.375 "unmap": false, 00:16:19.375 "flush": false, 
00:16:19.375 "reset": true, 00:16:19.375 "nvme_admin": false, 00:16:19.375 "nvme_io": false, 00:16:19.375 "nvme_io_md": false, 00:16:19.375 "write_zeroes": true, 00:16:19.375 "zcopy": false, 00:16:19.375 "get_zone_info": false, 00:16:19.375 "zone_management": false, 00:16:19.375 "zone_append": false, 00:16:19.375 "compare": false, 00:16:19.375 "compare_and_write": false, 00:16:19.375 "abort": false, 00:16:19.375 "seek_hole": false, 00:16:19.375 "seek_data": false, 00:16:19.375 "copy": false, 00:16:19.375 "nvme_iov_md": false 00:16:19.375 }, 00:16:19.375 "driver_specific": { 00:16:19.375 "raid": { 00:16:19.375 "uuid": "38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:19.375 "strip_size_kb": 64, 00:16:19.375 "state": "online", 00:16:19.375 "raid_level": "raid5f", 00:16:19.375 "superblock": true, 00:16:19.375 "num_base_bdevs": 4, 00:16:19.375 "num_base_bdevs_discovered": 4, 00:16:19.375 "num_base_bdevs_operational": 4, 00:16:19.376 "base_bdevs_list": [ 00:16:19.376 { 00:16:19.376 "name": "BaseBdev1", 00:16:19.376 "uuid": "53281f34-e354-4d9a-97a6-940ead163db5", 00:16:19.376 "is_configured": true, 00:16:19.376 "data_offset": 2048, 00:16:19.376 "data_size": 63488 00:16:19.376 }, 00:16:19.376 { 00:16:19.376 "name": "BaseBdev2", 00:16:19.376 "uuid": "5e89d0d3-a495-4970-b557-a1cecffe50b4", 00:16:19.376 "is_configured": true, 00:16:19.376 "data_offset": 2048, 00:16:19.376 "data_size": 63488 00:16:19.376 }, 00:16:19.376 { 00:16:19.376 "name": "BaseBdev3", 00:16:19.376 "uuid": "30661f76-9cb8-4a38-a794-826975643a1b", 00:16:19.376 "is_configured": true, 00:16:19.376 "data_offset": 2048, 00:16:19.376 "data_size": 63488 00:16:19.376 }, 00:16:19.376 { 00:16:19.376 "name": "BaseBdev4", 00:16:19.376 "uuid": "fbbcb607-a4c5-43f4-a5c6-fcb6d8e5be4b", 00:16:19.376 "is_configured": true, 00:16:19.376 "data_offset": 2048, 00:16:19.376 "data_size": 63488 00:16:19.376 } 00:16:19.376 ] 00:16:19.376 } 00:16:19.376 } 00:16:19.376 }' 00:16:19.376 04:05:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.376 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:19.376 BaseBdev2 00:16:19.376 BaseBdev3 00:16:19.376 BaseBdev4' 00:16:19.376 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.634 04:05:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.634 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.634 [2024-11-18 04:05:16.265178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.894 "name": "Existed_Raid", 00:16:19.894 "uuid": "38999ee0-d3cd-4579-9639-bbb853f9d095", 00:16:19.894 "strip_size_kb": 64, 00:16:19.894 "state": "online", 00:16:19.894 "raid_level": "raid5f", 00:16:19.894 "superblock": true, 00:16:19.894 "num_base_bdevs": 4, 00:16:19.894 "num_base_bdevs_discovered": 3, 00:16:19.894 "num_base_bdevs_operational": 3, 00:16:19.894 "base_bdevs_list": [ 00:16:19.894 { 00:16:19.894 "name": null, 00:16:19.894 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:19.894 "is_configured": false, 00:16:19.894 "data_offset": 0, 00:16:19.894 "data_size": 63488 00:16:19.894 }, 00:16:19.894 { 00:16:19.894 "name": "BaseBdev2", 00:16:19.894 "uuid": "5e89d0d3-a495-4970-b557-a1cecffe50b4", 00:16:19.894 "is_configured": true, 00:16:19.894 "data_offset": 2048, 00:16:19.894 "data_size": 63488 00:16:19.894 }, 00:16:19.894 { 00:16:19.894 "name": "BaseBdev3", 00:16:19.894 "uuid": "30661f76-9cb8-4a38-a794-826975643a1b", 00:16:19.894 "is_configured": true, 00:16:19.894 "data_offset": 2048, 00:16:19.894 "data_size": 63488 00:16:19.894 }, 00:16:19.894 { 00:16:19.894 "name": "BaseBdev4", 00:16:19.894 "uuid": "fbbcb607-a4c5-43f4-a5c6-fcb6d8e5be4b", 00:16:19.894 "is_configured": true, 00:16:19.894 "data_offset": 2048, 00:16:19.894 "data_size": 63488 00:16:19.894 } 00:16:19.894 ] 00:16:19.894 }' 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.894 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.154 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:20.154 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.154 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.154 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.154 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.154 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.414 [2024-11-18 04:05:16.829335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:20.414 [2024-11-18 04:05:16.829551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.414 [2024-11-18 04:05:16.919226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.414 
04:05:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.414 04:05:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.414 [2024-11-18 04:05:16.975119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 [2024-11-18 04:05:17.124238] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:20.674 [2024-11-18 04:05:17.124325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.674 BaseBdev2 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.674 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.934 [ 00:16:20.934 { 00:16:20.934 "name": "BaseBdev2", 00:16:20.934 "aliases": [ 00:16:20.934 "c6ab80a5-a418-4530-9cc8-c822100e3f4b" 00:16:20.934 ], 00:16:20.934 "product_name": "Malloc disk", 00:16:20.934 "block_size": 512, 00:16:20.934 "num_blocks": 65536, 00:16:20.934 "uuid": 
"c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:20.934 "assigned_rate_limits": { 00:16:20.934 "rw_ios_per_sec": 0, 00:16:20.934 "rw_mbytes_per_sec": 0, 00:16:20.934 "r_mbytes_per_sec": 0, 00:16:20.934 "w_mbytes_per_sec": 0 00:16:20.934 }, 00:16:20.934 "claimed": false, 00:16:20.934 "zoned": false, 00:16:20.934 "supported_io_types": { 00:16:20.934 "read": true, 00:16:20.934 "write": true, 00:16:20.934 "unmap": true, 00:16:20.934 "flush": true, 00:16:20.934 "reset": true, 00:16:20.934 "nvme_admin": false, 00:16:20.934 "nvme_io": false, 00:16:20.934 "nvme_io_md": false, 00:16:20.934 "write_zeroes": true, 00:16:20.934 "zcopy": true, 00:16:20.934 "get_zone_info": false, 00:16:20.934 "zone_management": false, 00:16:20.934 "zone_append": false, 00:16:20.934 "compare": false, 00:16:20.934 "compare_and_write": false, 00:16:20.934 "abort": true, 00:16:20.934 "seek_hole": false, 00:16:20.934 "seek_data": false, 00:16:20.934 "copy": true, 00:16:20.934 "nvme_iov_md": false 00:16:20.934 }, 00:16:20.934 "memory_domains": [ 00:16:20.934 { 00:16:20.934 "dma_device_id": "system", 00:16:20.934 "dma_device_type": 1 00:16:20.934 }, 00:16:20.934 { 00:16:20.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.934 "dma_device_type": 2 00:16:20.934 } 00:16:20.934 ], 00:16:20.934 "driver_specific": {} 00:16:20.934 } 00:16:20.934 ] 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.934 BaseBdev3 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.934 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.934 [ 00:16:20.934 { 00:16:20.935 "name": "BaseBdev3", 00:16:20.935 "aliases": [ 00:16:20.935 "d1036d09-7e5a-4f0b-baa3-1151740158b3" 00:16:20.935 ], 00:16:20.935 
"product_name": "Malloc disk", 00:16:20.935 "block_size": 512, 00:16:20.935 "num_blocks": 65536, 00:16:20.935 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:20.935 "assigned_rate_limits": { 00:16:20.935 "rw_ios_per_sec": 0, 00:16:20.935 "rw_mbytes_per_sec": 0, 00:16:20.935 "r_mbytes_per_sec": 0, 00:16:20.935 "w_mbytes_per_sec": 0 00:16:20.935 }, 00:16:20.935 "claimed": false, 00:16:20.935 "zoned": false, 00:16:20.935 "supported_io_types": { 00:16:20.935 "read": true, 00:16:20.935 "write": true, 00:16:20.935 "unmap": true, 00:16:20.935 "flush": true, 00:16:20.935 "reset": true, 00:16:20.935 "nvme_admin": false, 00:16:20.935 "nvme_io": false, 00:16:20.935 "nvme_io_md": false, 00:16:20.935 "write_zeroes": true, 00:16:20.935 "zcopy": true, 00:16:20.935 "get_zone_info": false, 00:16:20.935 "zone_management": false, 00:16:20.935 "zone_append": false, 00:16:20.935 "compare": false, 00:16:20.935 "compare_and_write": false, 00:16:20.935 "abort": true, 00:16:20.935 "seek_hole": false, 00:16:20.935 "seek_data": false, 00:16:20.935 "copy": true, 00:16:20.935 "nvme_iov_md": false 00:16:20.935 }, 00:16:20.935 "memory_domains": [ 00:16:20.935 { 00:16:20.935 "dma_device_id": "system", 00:16:20.935 "dma_device_type": 1 00:16:20.935 }, 00:16:20.935 { 00:16:20.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.935 "dma_device_type": 2 00:16:20.935 } 00:16:20.935 ], 00:16:20.935 "driver_specific": {} 00:16:20.935 } 00:16:20.935 ] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 BaseBdev4 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 [ 00:16:20.935 { 00:16:20.935 "name": "BaseBdev4", 00:16:20.935 
"aliases": [ 00:16:20.935 "89d24e2a-0a7e-4bdd-ab83-94f60259d80c" 00:16:20.935 ], 00:16:20.935 "product_name": "Malloc disk", 00:16:20.935 "block_size": 512, 00:16:20.935 "num_blocks": 65536, 00:16:20.935 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:20.935 "assigned_rate_limits": { 00:16:20.935 "rw_ios_per_sec": 0, 00:16:20.935 "rw_mbytes_per_sec": 0, 00:16:20.935 "r_mbytes_per_sec": 0, 00:16:20.935 "w_mbytes_per_sec": 0 00:16:20.935 }, 00:16:20.935 "claimed": false, 00:16:20.935 "zoned": false, 00:16:20.935 "supported_io_types": { 00:16:20.935 "read": true, 00:16:20.935 "write": true, 00:16:20.935 "unmap": true, 00:16:20.935 "flush": true, 00:16:20.935 "reset": true, 00:16:20.935 "nvme_admin": false, 00:16:20.935 "nvme_io": false, 00:16:20.935 "nvme_io_md": false, 00:16:20.935 "write_zeroes": true, 00:16:20.935 "zcopy": true, 00:16:20.935 "get_zone_info": false, 00:16:20.935 "zone_management": false, 00:16:20.935 "zone_append": false, 00:16:20.935 "compare": false, 00:16:20.935 "compare_and_write": false, 00:16:20.935 "abort": true, 00:16:20.935 "seek_hole": false, 00:16:20.935 "seek_data": false, 00:16:20.935 "copy": true, 00:16:20.935 "nvme_iov_md": false 00:16:20.935 }, 00:16:20.935 "memory_domains": [ 00:16:20.935 { 00:16:20.935 "dma_device_id": "system", 00:16:20.935 "dma_device_type": 1 00:16:20.935 }, 00:16:20.935 { 00:16:20.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.935 "dma_device_type": 2 00:16:20.935 } 00:16:20.935 ], 00:16:20.935 "driver_specific": {} 00:16:20.935 } 00:16:20.935 ] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:20.935 
04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 [2024-11-18 04:05:17.500960] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.935 [2024-11-18 04:05:17.501043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.935 [2024-11-18 04:05:17.501084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.935 [2024-11-18 04:05:17.502771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:20.935 [2024-11-18 04:05:17.502870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.935 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.935 "name": "Existed_Raid", 00:16:20.935 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:20.935 "strip_size_kb": 64, 00:16:20.935 "state": "configuring", 00:16:20.935 "raid_level": "raid5f", 00:16:20.935 "superblock": true, 00:16:20.935 "num_base_bdevs": 4, 00:16:20.935 "num_base_bdevs_discovered": 3, 00:16:20.935 "num_base_bdevs_operational": 4, 00:16:20.935 "base_bdevs_list": [ 00:16:20.935 { 00:16:20.935 "name": "BaseBdev1", 00:16:20.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.935 "is_configured": false, 00:16:20.935 "data_offset": 0, 00:16:20.935 "data_size": 0 00:16:20.935 }, 00:16:20.935 { 00:16:20.935 "name": "BaseBdev2", 00:16:20.935 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:20.935 "is_configured": true, 00:16:20.935 "data_offset": 2048, 00:16:20.935 "data_size": 63488 00:16:20.935 }, 00:16:20.935 { 00:16:20.935 "name": "BaseBdev3", 
00:16:20.935 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:20.935 "is_configured": true, 00:16:20.935 "data_offset": 2048, 00:16:20.935 "data_size": 63488 00:16:20.935 }, 00:16:20.935 { 00:16:20.935 "name": "BaseBdev4", 00:16:20.935 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:20.935 "is_configured": true, 00:16:20.935 "data_offset": 2048, 00:16:20.935 "data_size": 63488 00:16:20.935 } 00:16:20.935 ] 00:16:20.935 }' 00:16:20.936 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.936 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.503 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:21.503 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.503 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.503 [2024-11-18 04:05:17.952190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.504 
04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.504 04:05:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.504 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.504 "name": "Existed_Raid", 00:16:21.504 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:21.504 "strip_size_kb": 64, 00:16:21.504 "state": "configuring", 00:16:21.504 "raid_level": "raid5f", 00:16:21.504 "superblock": true, 00:16:21.504 "num_base_bdevs": 4, 00:16:21.504 "num_base_bdevs_discovered": 2, 00:16:21.504 "num_base_bdevs_operational": 4, 00:16:21.504 "base_bdevs_list": [ 00:16:21.504 { 00:16:21.504 "name": "BaseBdev1", 00:16:21.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.504 "is_configured": false, 00:16:21.504 "data_offset": 0, 00:16:21.504 "data_size": 0 00:16:21.504 }, 00:16:21.504 { 00:16:21.504 "name": null, 00:16:21.504 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:21.504 "is_configured": false, 00:16:21.504 "data_offset": 0, 00:16:21.504 "data_size": 63488 00:16:21.504 }, 00:16:21.504 { 
00:16:21.504 "name": "BaseBdev3", 00:16:21.504 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:21.504 "is_configured": true, 00:16:21.504 "data_offset": 2048, 00:16:21.504 "data_size": 63488 00:16:21.504 }, 00:16:21.504 { 00:16:21.504 "name": "BaseBdev4", 00:16:21.504 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:21.504 "is_configured": true, 00:16:21.504 "data_offset": 2048, 00:16:21.504 "data_size": 63488 00:16:21.504 } 00:16:21.504 ] 00:16:21.504 }' 00:16:21.504 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.504 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.767 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.767 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.767 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.768 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.768 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.030 [2024-11-18 04:05:18.465296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.030 BaseBdev1 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.030 [ 00:16:22.030 { 00:16:22.030 "name": "BaseBdev1", 00:16:22.030 "aliases": [ 00:16:22.030 "de8a2467-da26-45aa-a369-4d63cb2d1ecf" 00:16:22.030 ], 00:16:22.030 "product_name": "Malloc disk", 00:16:22.030 "block_size": 512, 00:16:22.030 "num_blocks": 65536, 00:16:22.030 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:22.030 "assigned_rate_limits": { 00:16:22.030 "rw_ios_per_sec": 0, 00:16:22.030 "rw_mbytes_per_sec": 0, 00:16:22.030 
"r_mbytes_per_sec": 0, 00:16:22.030 "w_mbytes_per_sec": 0 00:16:22.030 }, 00:16:22.030 "claimed": true, 00:16:22.030 "claim_type": "exclusive_write", 00:16:22.030 "zoned": false, 00:16:22.030 "supported_io_types": { 00:16:22.030 "read": true, 00:16:22.030 "write": true, 00:16:22.030 "unmap": true, 00:16:22.030 "flush": true, 00:16:22.030 "reset": true, 00:16:22.030 "nvme_admin": false, 00:16:22.030 "nvme_io": false, 00:16:22.030 "nvme_io_md": false, 00:16:22.030 "write_zeroes": true, 00:16:22.030 "zcopy": true, 00:16:22.030 "get_zone_info": false, 00:16:22.030 "zone_management": false, 00:16:22.030 "zone_append": false, 00:16:22.030 "compare": false, 00:16:22.030 "compare_and_write": false, 00:16:22.030 "abort": true, 00:16:22.030 "seek_hole": false, 00:16:22.030 "seek_data": false, 00:16:22.030 "copy": true, 00:16:22.030 "nvme_iov_md": false 00:16:22.030 }, 00:16:22.030 "memory_domains": [ 00:16:22.030 { 00:16:22.030 "dma_device_id": "system", 00:16:22.030 "dma_device_type": 1 00:16:22.030 }, 00:16:22.030 { 00:16:22.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.030 "dma_device_type": 2 00:16:22.030 } 00:16:22.030 ], 00:16:22.030 "driver_specific": {} 00:16:22.030 } 00:16:22.030 ] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.030 04:05:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.030 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.030 "name": "Existed_Raid", 00:16:22.030 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:22.030 "strip_size_kb": 64, 00:16:22.030 "state": "configuring", 00:16:22.030 "raid_level": "raid5f", 00:16:22.030 "superblock": true, 00:16:22.030 "num_base_bdevs": 4, 00:16:22.030 "num_base_bdevs_discovered": 3, 00:16:22.030 "num_base_bdevs_operational": 4, 00:16:22.030 "base_bdevs_list": [ 00:16:22.030 { 00:16:22.030 "name": "BaseBdev1", 00:16:22.030 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:22.030 "is_configured": true, 00:16:22.030 "data_offset": 2048, 00:16:22.030 "data_size": 63488 00:16:22.030 
}, 00:16:22.030 { 00:16:22.030 "name": null, 00:16:22.030 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:22.030 "is_configured": false, 00:16:22.030 "data_offset": 0, 00:16:22.030 "data_size": 63488 00:16:22.030 }, 00:16:22.030 { 00:16:22.030 "name": "BaseBdev3", 00:16:22.031 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:22.031 "is_configured": true, 00:16:22.031 "data_offset": 2048, 00:16:22.031 "data_size": 63488 00:16:22.031 }, 00:16:22.031 { 00:16:22.031 "name": "BaseBdev4", 00:16:22.031 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:22.031 "is_configured": true, 00:16:22.031 "data_offset": 2048, 00:16:22.031 "data_size": 63488 00:16:22.031 } 00:16:22.031 ] 00:16:22.031 }' 00:16:22.031 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.031 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.600 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.600 04:05:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:22.600 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.600 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.600 04:05:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.600 
[2024-11-18 04:05:19.012406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.600 "name": "Existed_Raid", 00:16:22.600 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:22.600 "strip_size_kb": 64, 00:16:22.600 "state": "configuring", 00:16:22.600 "raid_level": "raid5f", 00:16:22.600 "superblock": true, 00:16:22.600 "num_base_bdevs": 4, 00:16:22.600 "num_base_bdevs_discovered": 2, 00:16:22.600 "num_base_bdevs_operational": 4, 00:16:22.600 "base_bdevs_list": [ 00:16:22.600 { 00:16:22.600 "name": "BaseBdev1", 00:16:22.600 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:22.600 "is_configured": true, 00:16:22.600 "data_offset": 2048, 00:16:22.600 "data_size": 63488 00:16:22.600 }, 00:16:22.600 { 00:16:22.600 "name": null, 00:16:22.600 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:22.600 "is_configured": false, 00:16:22.600 "data_offset": 0, 00:16:22.600 "data_size": 63488 00:16:22.600 }, 00:16:22.600 { 00:16:22.600 "name": null, 00:16:22.600 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:22.600 "is_configured": false, 00:16:22.600 "data_offset": 0, 00:16:22.600 "data_size": 63488 00:16:22.600 }, 00:16:22.600 { 00:16:22.600 "name": "BaseBdev4", 00:16:22.600 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:22.600 "is_configured": true, 00:16:22.600 "data_offset": 2048, 00:16:22.600 "data_size": 63488 00:16:22.600 } 00:16:22.600 ] 00:16:22.600 }' 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.600 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:22.860 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.860 04:05:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.860 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.120 [2024-11-18 04:05:19.507752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.120 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.121 04:05:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.121 "name": "Existed_Raid", 00:16:23.121 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:23.121 "strip_size_kb": 64, 00:16:23.121 "state": "configuring", 00:16:23.121 "raid_level": "raid5f", 00:16:23.121 "superblock": true, 00:16:23.121 "num_base_bdevs": 4, 00:16:23.121 "num_base_bdevs_discovered": 3, 00:16:23.121 "num_base_bdevs_operational": 4, 00:16:23.121 "base_bdevs_list": [ 00:16:23.121 { 00:16:23.121 "name": "BaseBdev1", 00:16:23.121 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:23.121 "is_configured": true, 00:16:23.121 "data_offset": 2048, 00:16:23.121 "data_size": 63488 00:16:23.121 }, 00:16:23.121 { 00:16:23.121 "name": null, 00:16:23.121 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:23.121 "is_configured": false, 00:16:23.121 "data_offset": 0, 00:16:23.121 "data_size": 63488 00:16:23.121 }, 00:16:23.121 { 00:16:23.121 "name": "BaseBdev3", 00:16:23.121 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:23.121 "is_configured": true, 00:16:23.121 "data_offset": 2048, 00:16:23.121 "data_size": 63488 00:16:23.121 }, 00:16:23.121 { 
00:16:23.121 "name": "BaseBdev4", 00:16:23.121 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:23.121 "is_configured": true, 00:16:23.121 "data_offset": 2048, 00:16:23.121 "data_size": 63488 00:16:23.121 } 00:16:23.121 ] 00:16:23.121 }' 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.121 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.381 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.381 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.381 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.381 04:05:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:23.381 04:05:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.641 [2024-11-18 04:05:20.042855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.641 "name": "Existed_Raid", 00:16:23.641 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:23.641 "strip_size_kb": 64, 00:16:23.641 "state": "configuring", 00:16:23.641 "raid_level": "raid5f", 00:16:23.641 "superblock": true, 00:16:23.641 "num_base_bdevs": 4, 00:16:23.641 "num_base_bdevs_discovered": 2, 00:16:23.641 
"num_base_bdevs_operational": 4, 00:16:23.641 "base_bdevs_list": [ 00:16:23.641 { 00:16:23.641 "name": null, 00:16:23.641 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:23.641 "is_configured": false, 00:16:23.641 "data_offset": 0, 00:16:23.641 "data_size": 63488 00:16:23.641 }, 00:16:23.641 { 00:16:23.641 "name": null, 00:16:23.641 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:23.641 "is_configured": false, 00:16:23.641 "data_offset": 0, 00:16:23.641 "data_size": 63488 00:16:23.641 }, 00:16:23.641 { 00:16:23.641 "name": "BaseBdev3", 00:16:23.641 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:23.641 "is_configured": true, 00:16:23.641 "data_offset": 2048, 00:16:23.641 "data_size": 63488 00:16:23.641 }, 00:16:23.641 { 00:16:23.641 "name": "BaseBdev4", 00:16:23.641 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:23.641 "is_configured": true, 00:16:23.641 "data_offset": 2048, 00:16:23.641 "data_size": 63488 00:16:23.641 } 00:16:23.641 ] 00:16:23.641 }' 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.641 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.212 [2024-11-18 04:05:20.694346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.212 "name": "Existed_Raid", 00:16:24.212 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:24.212 "strip_size_kb": 64, 00:16:24.212 "state": "configuring", 00:16:24.212 "raid_level": "raid5f", 00:16:24.212 "superblock": true, 00:16:24.212 "num_base_bdevs": 4, 00:16:24.212 "num_base_bdevs_discovered": 3, 00:16:24.212 "num_base_bdevs_operational": 4, 00:16:24.212 "base_bdevs_list": [ 00:16:24.212 { 00:16:24.212 "name": null, 00:16:24.212 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:24.212 "is_configured": false, 00:16:24.212 "data_offset": 0, 00:16:24.212 "data_size": 63488 00:16:24.212 }, 00:16:24.212 { 00:16:24.212 "name": "BaseBdev2", 00:16:24.212 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:24.212 "is_configured": true, 00:16:24.212 "data_offset": 2048, 00:16:24.212 "data_size": 63488 00:16:24.212 }, 00:16:24.212 { 00:16:24.212 "name": "BaseBdev3", 00:16:24.212 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:24.212 "is_configured": true, 00:16:24.212 "data_offset": 2048, 00:16:24.212 "data_size": 63488 00:16:24.212 }, 00:16:24.212 { 00:16:24.212 "name": "BaseBdev4", 00:16:24.212 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:24.212 "is_configured": true, 00:16:24.212 "data_offset": 2048, 00:16:24.212 "data_size": 63488 00:16:24.212 } 00:16:24.212 ] 00:16:24.212 }' 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.212 04:05:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u de8a2467-da26-45aa-a369-4d63cb2d1ecf 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.781 [2024-11-18 04:05:21.296266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:24.781 [2024-11-18 04:05:21.296563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:24.781 [2024-11-18 
04:05:21.296608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.781 [2024-11-18 04:05:21.296894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:24.781 NewBaseBdev 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.781 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.781 [2024-11-18 04:05:21.303752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:24.782 [2024-11-18 04:05:21.303804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:24.782 [2024-11-18 04:05:21.304099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.782 [ 00:16:24.782 { 00:16:24.782 "name": "NewBaseBdev", 00:16:24.782 "aliases": [ 00:16:24.782 "de8a2467-da26-45aa-a369-4d63cb2d1ecf" 00:16:24.782 ], 00:16:24.782 "product_name": "Malloc disk", 00:16:24.782 "block_size": 512, 00:16:24.782 "num_blocks": 65536, 00:16:24.782 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:24.782 "assigned_rate_limits": { 00:16:24.782 "rw_ios_per_sec": 0, 00:16:24.782 "rw_mbytes_per_sec": 0, 00:16:24.782 "r_mbytes_per_sec": 0, 00:16:24.782 "w_mbytes_per_sec": 0 00:16:24.782 }, 00:16:24.782 "claimed": true, 00:16:24.782 "claim_type": "exclusive_write", 00:16:24.782 "zoned": false, 00:16:24.782 "supported_io_types": { 00:16:24.782 "read": true, 00:16:24.782 "write": true, 00:16:24.782 "unmap": true, 00:16:24.782 "flush": true, 00:16:24.782 "reset": true, 00:16:24.782 "nvme_admin": false, 00:16:24.782 "nvme_io": false, 00:16:24.782 "nvme_io_md": false, 00:16:24.782 "write_zeroes": true, 00:16:24.782 "zcopy": true, 00:16:24.782 "get_zone_info": false, 00:16:24.782 "zone_management": false, 00:16:24.782 "zone_append": false, 00:16:24.782 "compare": false, 00:16:24.782 "compare_and_write": false, 00:16:24.782 "abort": true, 00:16:24.782 "seek_hole": false, 00:16:24.782 "seek_data": false, 00:16:24.782 "copy": true, 00:16:24.782 "nvme_iov_md": false 00:16:24.782 }, 00:16:24.782 "memory_domains": [ 00:16:24.782 { 00:16:24.782 "dma_device_id": "system", 00:16:24.782 "dma_device_type": 1 00:16:24.782 }, 00:16:24.782 { 00:16:24.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.782 "dma_device_type": 2 00:16:24.782 } 00:16:24.782 ], 00:16:24.782 "driver_specific": {} 00:16:24.782 } 00:16:24.782 ] 00:16:24.782 04:05:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.782 "name": "Existed_Raid", 00:16:24.782 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:24.782 "strip_size_kb": 64, 00:16:24.782 "state": "online", 00:16:24.782 "raid_level": "raid5f", 00:16:24.782 "superblock": true, 00:16:24.782 "num_base_bdevs": 4, 00:16:24.782 "num_base_bdevs_discovered": 4, 00:16:24.782 "num_base_bdevs_operational": 4, 00:16:24.782 "base_bdevs_list": [ 00:16:24.782 { 00:16:24.782 "name": "NewBaseBdev", 00:16:24.782 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:24.782 "is_configured": true, 00:16:24.782 "data_offset": 2048, 00:16:24.782 "data_size": 63488 00:16:24.782 }, 00:16:24.782 { 00:16:24.782 "name": "BaseBdev2", 00:16:24.782 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:24.782 "is_configured": true, 00:16:24.782 "data_offset": 2048, 00:16:24.782 "data_size": 63488 00:16:24.782 }, 00:16:24.782 { 00:16:24.782 "name": "BaseBdev3", 00:16:24.782 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:24.782 "is_configured": true, 00:16:24.782 "data_offset": 2048, 00:16:24.782 "data_size": 63488 00:16:24.782 }, 00:16:24.782 { 00:16:24.782 "name": "BaseBdev4", 00:16:24.782 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:24.782 "is_configured": true, 00:16:24.782 "data_offset": 2048, 00:16:24.782 "data_size": 63488 00:16:24.782 } 00:16:24.782 ] 00:16:24.782 }' 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.782 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.351 [2024-11-18 04:05:21.807547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.351 "name": "Existed_Raid", 00:16:25.351 "aliases": [ 00:16:25.351 "c5c32a14-b004-46c4-910c-7d6fc91f154e" 00:16:25.351 ], 00:16:25.351 "product_name": "Raid Volume", 00:16:25.351 "block_size": 512, 00:16:25.351 "num_blocks": 190464, 00:16:25.351 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:25.351 "assigned_rate_limits": { 00:16:25.351 "rw_ios_per_sec": 0, 00:16:25.351 "rw_mbytes_per_sec": 0, 00:16:25.351 "r_mbytes_per_sec": 0, 00:16:25.351 "w_mbytes_per_sec": 0 00:16:25.351 }, 00:16:25.351 "claimed": false, 00:16:25.351 "zoned": false, 00:16:25.351 "supported_io_types": { 00:16:25.351 "read": true, 00:16:25.351 "write": true, 00:16:25.351 "unmap": false, 00:16:25.351 "flush": false, 00:16:25.351 "reset": true, 00:16:25.351 "nvme_admin": false, 00:16:25.351 "nvme_io": false, 
00:16:25.351 "nvme_io_md": false, 00:16:25.351 "write_zeroes": true, 00:16:25.351 "zcopy": false, 00:16:25.351 "get_zone_info": false, 00:16:25.351 "zone_management": false, 00:16:25.351 "zone_append": false, 00:16:25.351 "compare": false, 00:16:25.351 "compare_and_write": false, 00:16:25.351 "abort": false, 00:16:25.351 "seek_hole": false, 00:16:25.351 "seek_data": false, 00:16:25.351 "copy": false, 00:16:25.351 "nvme_iov_md": false 00:16:25.351 }, 00:16:25.351 "driver_specific": { 00:16:25.351 "raid": { 00:16:25.351 "uuid": "c5c32a14-b004-46c4-910c-7d6fc91f154e", 00:16:25.351 "strip_size_kb": 64, 00:16:25.351 "state": "online", 00:16:25.351 "raid_level": "raid5f", 00:16:25.351 "superblock": true, 00:16:25.351 "num_base_bdevs": 4, 00:16:25.351 "num_base_bdevs_discovered": 4, 00:16:25.351 "num_base_bdevs_operational": 4, 00:16:25.351 "base_bdevs_list": [ 00:16:25.351 { 00:16:25.351 "name": "NewBaseBdev", 00:16:25.351 "uuid": "de8a2467-da26-45aa-a369-4d63cb2d1ecf", 00:16:25.351 "is_configured": true, 00:16:25.351 "data_offset": 2048, 00:16:25.351 "data_size": 63488 00:16:25.351 }, 00:16:25.351 { 00:16:25.351 "name": "BaseBdev2", 00:16:25.351 "uuid": "c6ab80a5-a418-4530-9cc8-c822100e3f4b", 00:16:25.351 "is_configured": true, 00:16:25.351 "data_offset": 2048, 00:16:25.351 "data_size": 63488 00:16:25.351 }, 00:16:25.351 { 00:16:25.351 "name": "BaseBdev3", 00:16:25.351 "uuid": "d1036d09-7e5a-4f0b-baa3-1151740158b3", 00:16:25.351 "is_configured": true, 00:16:25.351 "data_offset": 2048, 00:16:25.351 "data_size": 63488 00:16:25.351 }, 00:16:25.351 { 00:16:25.351 "name": "BaseBdev4", 00:16:25.351 "uuid": "89d24e2a-0a7e-4bdd-ab83-94f60259d80c", 00:16:25.351 "is_configured": true, 00:16:25.351 "data_offset": 2048, 00:16:25.351 "data_size": 63488 00:16:25.351 } 00:16:25.351 ] 00:16:25.351 } 00:16:25.351 } 00:16:25.351 }' 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:25.351 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:25.352 BaseBdev2 00:16:25.352 BaseBdev3 00:16:25.352 BaseBdev4' 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.352 04:05:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.613 [2024-11-18 04:05:22.114837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.613 [2024-11-18 04:05:22.114897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.613 [2024-11-18 04:05:22.114992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.613 [2024-11-18 04:05:22.115279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.613 [2024-11-18 04:05:22.115330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83328 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83328 ']' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83328 00:16:25.613 04:05:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83328 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83328' 00:16:25.613 killing process with pid 83328 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83328 00:16:25.613 [2024-11-18 04:05:22.162532] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.613 04:05:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83328 00:16:26.182 [2024-11-18 04:05:22.529036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.121 04:05:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:27.121 00:16:27.121 real 0m11.600s 00:16:27.121 user 0m18.605s 00:16:27.121 sys 0m2.140s 00:16:27.121 04:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.121 04:05:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.121 ************************************ 00:16:27.121 END TEST raid5f_state_function_test_sb 00:16:27.121 ************************************ 00:16:27.121 04:05:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:27.121 04:05:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:27.121 
04:05:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.121 04:05:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.121 ************************************ 00:16:27.121 START TEST raid5f_superblock_test 00:16:27.121 ************************************ 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83999 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83999 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83999 ']' 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.121 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.122 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.122 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.122 04:05:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.122 [2024-11-18 04:05:23.714854] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:27.122 [2024-11-18 04:05:23.715043] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83999 ] 00:16:27.382 [2024-11-18 04:05:23.887670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.382 [2024-11-18 04:05:23.992896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.642 [2024-11-18 04:05:24.187927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.642 [2024-11-18 04:05:24.188044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.902 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.162 malloc1 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.162 [2024-11-18 04:05:24.569315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.162 [2024-11-18 04:05:24.569436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.162 [2024-11-18 04:05:24.569493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.162 [2024-11-18 04:05:24.569526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.162 [2024-11-18 04:05:24.571532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.162 [2024-11-18 04:05:24.571598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.162 pt1 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.162 malloc2 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.162 [2024-11-18 04:05:24.621208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.162 [2024-11-18 04:05:24.621313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.162 [2024-11-18 04:05:24.621349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:28.162 [2024-11-18 04:05:24.621395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.162 [2024-11-18 04:05:24.623390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.162 [2024-11-18 04:05:24.623454] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.162 pt2 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.162 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.163 malloc3 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.163 [2024-11-18 04:05:24.713484] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.163 [2024-11-18 04:05:24.713533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.163 [2024-11-18 04:05:24.713568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:28.163 [2024-11-18 04:05:24.713577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.163 [2024-11-18 04:05:24.715594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.163 [2024-11-18 04:05:24.715632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.163 pt3 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.163 04:05:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.163 malloc4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.163 [2024-11-18 04:05:24.764981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:28.163 [2024-11-18 04:05:24.765072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.163 [2024-11-18 04:05:24.765107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:28.163 [2024-11-18 04:05:24.765138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.163 [2024-11-18 04:05:24.767202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.163 [2024-11-18 04:05:24.767283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:28.163 pt4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.163 [2024-11-18 04:05:24.776990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.163 [2024-11-18 04:05:24.778786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.163 [2024-11-18 04:05:24.778906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.163 [2024-11-18 04:05:24.778984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:28.163 [2024-11-18 04:05:24.779245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:28.163 [2024-11-18 04:05:24.779292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.163 [2024-11-18 04:05:24.779540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:28.163 [2024-11-18 04:05:24.786812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:28.163 [2024-11-18 04:05:24.786873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:28.163 [2024-11-18 04:05:24.787091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.163 
04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.163 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.423 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.424 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.424 "name": "raid_bdev1", 00:16:28.424 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:28.424 "strip_size_kb": 64, 00:16:28.424 "state": "online", 00:16:28.424 "raid_level": "raid5f", 00:16:28.424 "superblock": true, 00:16:28.424 "num_base_bdevs": 4, 00:16:28.424 "num_base_bdevs_discovered": 4, 00:16:28.424 "num_base_bdevs_operational": 4, 00:16:28.424 "base_bdevs_list": [ 00:16:28.424 { 00:16:28.424 "name": "pt1", 00:16:28.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.424 "is_configured": true, 00:16:28.424 "data_offset": 2048, 00:16:28.424 "data_size": 63488 00:16:28.424 }, 00:16:28.424 { 00:16:28.424 "name": "pt2", 00:16:28.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.424 "is_configured": true, 00:16:28.424 "data_offset": 2048, 00:16:28.424 
"data_size": 63488 00:16:28.424 }, 00:16:28.424 { 00:16:28.424 "name": "pt3", 00:16:28.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.424 "is_configured": true, 00:16:28.424 "data_offset": 2048, 00:16:28.424 "data_size": 63488 00:16:28.424 }, 00:16:28.424 { 00:16:28.424 "name": "pt4", 00:16:28.424 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.424 "is_configured": true, 00:16:28.424 "data_offset": 2048, 00:16:28.424 "data_size": 63488 00:16:28.424 } 00:16:28.424 ] 00:16:28.424 }' 00:16:28.424 04:05:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.424 04:05:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.684 [2024-11-18 04:05:25.246629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.684 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:28.684 "name": "raid_bdev1", 00:16:28.684 "aliases": [ 00:16:28.684 "ade01a2a-8712-4d8b-9730-6ec5140e74f5" 00:16:28.684 ], 00:16:28.684 "product_name": "Raid Volume", 00:16:28.684 "block_size": 512, 00:16:28.684 "num_blocks": 190464, 00:16:28.684 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:28.684 "assigned_rate_limits": { 00:16:28.684 "rw_ios_per_sec": 0, 00:16:28.685 "rw_mbytes_per_sec": 0, 00:16:28.685 "r_mbytes_per_sec": 0, 00:16:28.685 "w_mbytes_per_sec": 0 00:16:28.685 }, 00:16:28.685 "claimed": false, 00:16:28.685 "zoned": false, 00:16:28.685 "supported_io_types": { 00:16:28.685 "read": true, 00:16:28.685 "write": true, 00:16:28.685 "unmap": false, 00:16:28.685 "flush": false, 00:16:28.685 "reset": true, 00:16:28.685 "nvme_admin": false, 00:16:28.685 "nvme_io": false, 00:16:28.685 "nvme_io_md": false, 00:16:28.685 "write_zeroes": true, 00:16:28.685 "zcopy": false, 00:16:28.685 "get_zone_info": false, 00:16:28.685 "zone_management": false, 00:16:28.685 "zone_append": false, 00:16:28.685 "compare": false, 00:16:28.685 "compare_and_write": false, 00:16:28.685 "abort": false, 00:16:28.685 "seek_hole": false, 00:16:28.685 "seek_data": false, 00:16:28.685 "copy": false, 00:16:28.685 "nvme_iov_md": false 00:16:28.685 }, 00:16:28.685 "driver_specific": { 00:16:28.685 "raid": { 00:16:28.685 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:28.685 "strip_size_kb": 64, 00:16:28.685 "state": "online", 00:16:28.685 "raid_level": "raid5f", 00:16:28.685 "superblock": true, 00:16:28.685 "num_base_bdevs": 4, 00:16:28.685 "num_base_bdevs_discovered": 4, 00:16:28.685 "num_base_bdevs_operational": 4, 00:16:28.685 "base_bdevs_list": [ 00:16:28.685 { 00:16:28.685 "name": "pt1", 00:16:28.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.685 "is_configured": true, 00:16:28.685 "data_offset": 2048, 
00:16:28.685 "data_size": 63488 00:16:28.685 }, 00:16:28.685 { 00:16:28.685 "name": "pt2", 00:16:28.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.685 "is_configured": true, 00:16:28.685 "data_offset": 2048, 00:16:28.685 "data_size": 63488 00:16:28.685 }, 00:16:28.685 { 00:16:28.685 "name": "pt3", 00:16:28.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.685 "is_configured": true, 00:16:28.685 "data_offset": 2048, 00:16:28.685 "data_size": 63488 00:16:28.685 }, 00:16:28.685 { 00:16:28.685 "name": "pt4", 00:16:28.685 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.685 "is_configured": true, 00:16:28.685 "data_offset": 2048, 00:16:28.685 "data_size": 63488 00:16:28.685 } 00:16:28.685 ] 00:16:28.685 } 00:16:28.685 } 00:16:28.685 }' 00:16:28.685 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.945 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:28.945 pt2 00:16:28.945 pt3 00:16:28.945 pt4' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.946 04:05:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.946 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 [2024-11-18 04:05:25.602003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ade01a2a-8712-4d8b-9730-6ec5140e74f5 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
ade01a2a-8712-4d8b-9730-6ec5140e74f5 ']' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 [2024-11-18 04:05:25.649754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.207 [2024-11-18 04:05:25.649810] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.207 [2024-11-18 04:05:25.649945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.207 [2024-11-18 04:05:25.650049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.207 [2024-11-18 04:05:25.650102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.207 
04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 04:05:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.207 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.208 [2024-11-18 04:05:25.813484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:29.208 [2024-11-18 04:05:25.815226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:29.208 [2024-11-18 04:05:25.815305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:29.208 [2024-11-18 04:05:25.815352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:29.208 [2024-11-18 04:05:25.815435] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:29.208 [2024-11-18 04:05:25.815502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:29.208 [2024-11-18 04:05:25.815546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:29.208 [2024-11-18 04:05:25.815587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:29.208 [2024-11-18 04:05:25.815662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.208 [2024-11-18 04:05:25.815706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:29.208 request: 00:16:29.208 { 00:16:29.208 "name": "raid_bdev1", 00:16:29.208 "raid_level": "raid5f", 00:16:29.208 "base_bdevs": [ 00:16:29.208 "malloc1", 00:16:29.208 "malloc2", 00:16:29.208 "malloc3", 00:16:29.208 "malloc4" 00:16:29.208 ], 00:16:29.208 "strip_size_kb": 64, 00:16:29.208 "superblock": false, 00:16:29.208 "method": "bdev_raid_create", 00:16:29.208 "req_id": 1 00:16:29.208 } 00:16:29.208 Got JSON-RPC error response 
00:16:29.208 response: 00:16:29.208 { 00:16:29.208 "code": -17, 00:16:29.208 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:29.208 } 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.208 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.469 [2024-11-18 04:05:25.881347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.469 [2024-11-18 04:05:25.881432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:29.469 [2024-11-18 04:05:25.881463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:29.469 [2024-11-18 04:05:25.881491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.469 [2024-11-18 04:05:25.883528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.469 [2024-11-18 04:05:25.883612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.469 [2024-11-18 04:05:25.883696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:29.469 [2024-11-18 04:05:25.883775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.469 pt1 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.469 "name": "raid_bdev1", 00:16:29.469 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:29.469 "strip_size_kb": 64, 00:16:29.469 "state": "configuring", 00:16:29.469 "raid_level": "raid5f", 00:16:29.469 "superblock": true, 00:16:29.469 "num_base_bdevs": 4, 00:16:29.469 "num_base_bdevs_discovered": 1, 00:16:29.469 "num_base_bdevs_operational": 4, 00:16:29.469 "base_bdevs_list": [ 00:16:29.469 { 00:16:29.469 "name": "pt1", 00:16:29.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.469 "is_configured": true, 00:16:29.469 "data_offset": 2048, 00:16:29.469 "data_size": 63488 00:16:29.469 }, 00:16:29.469 { 00:16:29.469 "name": null, 00:16:29.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.469 "is_configured": false, 00:16:29.469 "data_offset": 2048, 00:16:29.469 "data_size": 63488 00:16:29.469 }, 00:16:29.469 { 00:16:29.469 "name": null, 00:16:29.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.469 "is_configured": false, 00:16:29.469 "data_offset": 2048, 00:16:29.469 "data_size": 63488 00:16:29.469 }, 00:16:29.469 { 00:16:29.469 "name": null, 00:16:29.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.469 "is_configured": false, 00:16:29.469 "data_offset": 2048, 00:16:29.469 "data_size": 63488 00:16:29.469 } 00:16:29.469 ] 00:16:29.469 }' 
00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.469 04:05:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.730 [2024-11-18 04:05:26.344556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.730 [2024-11-18 04:05:26.344658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.730 [2024-11-18 04:05:26.344689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:29.730 [2024-11-18 04:05:26.344734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.730 [2024-11-18 04:05:26.345117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.730 [2024-11-18 04:05:26.345171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.730 [2024-11-18 04:05:26.345255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:29.730 [2024-11-18 04:05:26.345301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.730 pt2 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.730 [2024-11-18 04:05:26.356548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.730 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.990 "name": "raid_bdev1", 00:16:29.990 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:29.990 "strip_size_kb": 64, 00:16:29.990 "state": "configuring", 00:16:29.990 "raid_level": "raid5f", 00:16:29.990 "superblock": true, 00:16:29.990 "num_base_bdevs": 4, 00:16:29.990 "num_base_bdevs_discovered": 1, 00:16:29.990 "num_base_bdevs_operational": 4, 00:16:29.990 "base_bdevs_list": [ 00:16:29.990 { 00:16:29.990 "name": "pt1", 00:16:29.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:29.990 "is_configured": true, 00:16:29.990 "data_offset": 2048, 00:16:29.990 "data_size": 63488 00:16:29.990 }, 00:16:29.990 { 00:16:29.990 "name": null, 00:16:29.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.990 "is_configured": false, 00:16:29.990 "data_offset": 0, 00:16:29.990 "data_size": 63488 00:16:29.990 }, 00:16:29.990 { 00:16:29.990 "name": null, 00:16:29.990 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.990 "is_configured": false, 00:16:29.990 "data_offset": 2048, 00:16:29.990 "data_size": 63488 00:16:29.990 }, 00:16:29.990 { 00:16:29.990 "name": null, 00:16:29.990 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.990 "is_configured": false, 00:16:29.990 "data_offset": 2048, 00:16:29.990 "data_size": 63488 00:16:29.990 } 00:16:29.990 ] 00:16:29.990 }' 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.990 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.251 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:30.251 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.251 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:30.251 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.251 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.251 [2024-11-18 04:05:26.839784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.251 [2024-11-18 04:05:26.839840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.252 [2024-11-18 04:05:26.839872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:30.252 [2024-11-18 04:05:26.839881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.252 [2024-11-18 04:05:26.840222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.252 [2024-11-18 04:05:26.840239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.252 [2024-11-18 04:05:26.840296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:30.252 [2024-11-18 04:05:26.840318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.252 pt2 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.252 [2024-11-18 04:05:26.851765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:30.252 [2024-11-18 04:05:26.851852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.252 [2024-11-18 04:05:26.851883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:30.252 [2024-11-18 04:05:26.851909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.252 [2024-11-18 04:05:26.852264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.252 [2024-11-18 04:05:26.852327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:30.252 [2024-11-18 04:05:26.852408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:30.252 [2024-11-18 04:05:26.852450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:30.252 pt3 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.252 [2024-11-18 04:05:26.863725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:30.252 [2024-11-18 04:05:26.863817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.252 [2024-11-18 04:05:26.863860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:30.252 [2024-11-18 04:05:26.863886] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.252 [2024-11-18 04:05:26.864237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.252 [2024-11-18 04:05:26.864288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:30.252 pt4 00:16:30.252 [2024-11-18 04:05:26.864364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:30.252 [2024-11-18 04:05:26.864405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:30.252 [2024-11-18 04:05:26.864538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:30.252 [2024-11-18 04:05:26.864574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:30.252 [2024-11-18 04:05:26.864805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:30.252 [2024-11-18 04:05:26.871370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:30.252 [2024-11-18 04:05:26.871422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:30.252 [2024-11-18 04:05:26.871622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.252 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.512 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.512 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.512 "name": "raid_bdev1", 00:16:30.512 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:30.512 "strip_size_kb": 64, 00:16:30.512 "state": "online", 00:16:30.512 "raid_level": "raid5f", 00:16:30.512 "superblock": true, 00:16:30.512 "num_base_bdevs": 4, 00:16:30.512 "num_base_bdevs_discovered": 4, 00:16:30.512 "num_base_bdevs_operational": 4, 00:16:30.512 "base_bdevs_list": [ 00:16:30.512 { 00:16:30.512 "name": "pt1", 00:16:30.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.512 "is_configured": true, 00:16:30.512 
"data_offset": 2048, 00:16:30.512 "data_size": 63488 00:16:30.512 }, 00:16:30.512 { 00:16:30.512 "name": "pt2", 00:16:30.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.512 "is_configured": true, 00:16:30.512 "data_offset": 2048, 00:16:30.512 "data_size": 63488 00:16:30.512 }, 00:16:30.512 { 00:16:30.512 "name": "pt3", 00:16:30.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.512 "is_configured": true, 00:16:30.512 "data_offset": 2048, 00:16:30.512 "data_size": 63488 00:16:30.512 }, 00:16:30.512 { 00:16:30.512 "name": "pt4", 00:16:30.512 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.512 "is_configured": true, 00:16:30.512 "data_offset": 2048, 00:16:30.512 "data_size": 63488 00:16:30.512 } 00:16:30.512 ] 00:16:30.512 }' 00:16:30.512 04:05:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.512 04:05:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.772 04:05:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.772 [2024-11-18 04:05:27.318947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.772 "name": "raid_bdev1", 00:16:30.772 "aliases": [ 00:16:30.772 "ade01a2a-8712-4d8b-9730-6ec5140e74f5" 00:16:30.772 ], 00:16:30.772 "product_name": "Raid Volume", 00:16:30.772 "block_size": 512, 00:16:30.772 "num_blocks": 190464, 00:16:30.772 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:30.772 "assigned_rate_limits": { 00:16:30.772 "rw_ios_per_sec": 0, 00:16:30.772 "rw_mbytes_per_sec": 0, 00:16:30.772 "r_mbytes_per_sec": 0, 00:16:30.772 "w_mbytes_per_sec": 0 00:16:30.772 }, 00:16:30.772 "claimed": false, 00:16:30.772 "zoned": false, 00:16:30.772 "supported_io_types": { 00:16:30.772 "read": true, 00:16:30.772 "write": true, 00:16:30.772 "unmap": false, 00:16:30.772 "flush": false, 00:16:30.772 "reset": true, 00:16:30.772 "nvme_admin": false, 00:16:30.772 "nvme_io": false, 00:16:30.772 "nvme_io_md": false, 00:16:30.772 "write_zeroes": true, 00:16:30.772 "zcopy": false, 00:16:30.772 "get_zone_info": false, 00:16:30.772 "zone_management": false, 00:16:30.772 "zone_append": false, 00:16:30.772 "compare": false, 00:16:30.772 "compare_and_write": false, 00:16:30.772 "abort": false, 00:16:30.772 "seek_hole": false, 00:16:30.772 "seek_data": false, 00:16:30.772 "copy": false, 00:16:30.772 "nvme_iov_md": false 00:16:30.772 }, 00:16:30.772 "driver_specific": { 00:16:30.772 "raid": { 00:16:30.772 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:30.772 "strip_size_kb": 64, 00:16:30.772 "state": "online", 00:16:30.772 "raid_level": "raid5f", 00:16:30.772 "superblock": true, 00:16:30.772 "num_base_bdevs": 4, 00:16:30.772 "num_base_bdevs_discovered": 4, 
00:16:30.772 "num_base_bdevs_operational": 4, 00:16:30.772 "base_bdevs_list": [ 00:16:30.772 { 00:16:30.772 "name": "pt1", 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.772 "is_configured": true, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 }, 00:16:30.772 { 00:16:30.772 "name": "pt2", 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.772 "is_configured": true, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 }, 00:16:30.772 { 00:16:30.772 "name": "pt3", 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.772 "is_configured": true, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 }, 00:16:30.772 { 00:16:30.772 "name": "pt4", 00:16:30.772 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.772 "is_configured": true, 00:16:30.772 "data_offset": 2048, 00:16:30.772 "data_size": 63488 00:16:30.772 } 00:16:30.772 ] 00:16:30.772 } 00:16:30.772 } 00:16:30.772 }' 00:16:30.772 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:31.033 pt2 00:16:31.033 pt3 00:16:31.033 pt4' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.033 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:31.033 [2024-11-18 04:05:27.666292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 04:05:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ade01a2a-8712-4d8b-9730-6ec5140e74f5 '!=' ade01a2a-8712-4d8b-9730-6ec5140e74f5 ']' 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 [2024-11-18 04:05:27.714103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.294 "name": "raid_bdev1", 00:16:31.294 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:31.294 "strip_size_kb": 64, 00:16:31.294 "state": "online", 00:16:31.294 "raid_level": "raid5f", 00:16:31.294 "superblock": true, 00:16:31.294 "num_base_bdevs": 4, 00:16:31.294 "num_base_bdevs_discovered": 3, 00:16:31.294 "num_base_bdevs_operational": 3, 00:16:31.294 "base_bdevs_list": [ 00:16:31.294 { 00:16:31.294 "name": null, 00:16:31.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.294 "is_configured": false, 00:16:31.294 "data_offset": 0, 00:16:31.294 "data_size": 63488 00:16:31.294 }, 00:16:31.294 { 00:16:31.294 "name": "pt2", 00:16:31.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.294 "is_configured": true, 00:16:31.294 "data_offset": 2048, 00:16:31.294 "data_size": 63488 00:16:31.294 }, 00:16:31.294 { 00:16:31.294 "name": "pt3", 00:16:31.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.294 "is_configured": true, 00:16:31.294 "data_offset": 2048, 00:16:31.294 "data_size": 63488 00:16:31.294 }, 00:16:31.294 { 00:16:31.294 "name": "pt4", 00:16:31.294 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.294 "is_configured": true, 00:16:31.294 
"data_offset": 2048, 00:16:31.294 "data_size": 63488 00:16:31.294 } 00:16:31.294 ] 00:16:31.294 }' 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.294 04:05:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.554 [2024-11-18 04:05:28.181263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.554 [2024-11-18 04:05:28.181330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.554 [2024-11-18 04:05:28.181406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.554 [2024-11-18 04:05:28.181500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.554 [2024-11-18 04:05:28.181542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.554 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.813 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.813 04:05:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 [2024-11-18 04:05:28.277098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.814 [2024-11-18 04:05:28.277177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.814 [2024-11-18 04:05:28.277209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:31.814 [2024-11-18 04:05:28.277256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.814 [2024-11-18 04:05:28.279342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.814 [2024-11-18 04:05:28.279409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.814 [2024-11-18 04:05:28.279497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:31.814 [2024-11-18 04:05:28.279568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.814 pt2 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.814 "name": "raid_bdev1", 00:16:31.814 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:31.814 "strip_size_kb": 64, 00:16:31.814 "state": "configuring", 00:16:31.814 "raid_level": "raid5f", 00:16:31.814 "superblock": true, 00:16:31.814 
"num_base_bdevs": 4, 00:16:31.814 "num_base_bdevs_discovered": 1, 00:16:31.814 "num_base_bdevs_operational": 3, 00:16:31.814 "base_bdevs_list": [ 00:16:31.814 { 00:16:31.814 "name": null, 00:16:31.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.814 "is_configured": false, 00:16:31.814 "data_offset": 2048, 00:16:31.814 "data_size": 63488 00:16:31.814 }, 00:16:31.814 { 00:16:31.814 "name": "pt2", 00:16:31.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.814 "is_configured": true, 00:16:31.814 "data_offset": 2048, 00:16:31.814 "data_size": 63488 00:16:31.814 }, 00:16:31.814 { 00:16:31.814 "name": null, 00:16:31.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.814 "is_configured": false, 00:16:31.814 "data_offset": 2048, 00:16:31.814 "data_size": 63488 00:16:31.814 }, 00:16:31.814 { 00:16:31.814 "name": null, 00:16:31.814 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.814 "is_configured": false, 00:16:31.814 "data_offset": 2048, 00:16:31.814 "data_size": 63488 00:16:31.814 } 00:16:31.814 ] 00:16:31.814 }' 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.814 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.385 [2024-11-18 04:05:28.748284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:32.385 [2024-11-18 
04:05:28.748365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.385 [2024-11-18 04:05:28.748399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:32.385 [2024-11-18 04:05:28.748425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.385 [2024-11-18 04:05:28.748808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.385 [2024-11-18 04:05:28.748874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:32.385 [2024-11-18 04:05:28.748960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:32.385 [2024-11-18 04:05:28.749012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:32.385 pt3 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.385 "name": "raid_bdev1", 00:16:32.385 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:32.385 "strip_size_kb": 64, 00:16:32.385 "state": "configuring", 00:16:32.385 "raid_level": "raid5f", 00:16:32.385 "superblock": true, 00:16:32.385 "num_base_bdevs": 4, 00:16:32.385 "num_base_bdevs_discovered": 2, 00:16:32.385 "num_base_bdevs_operational": 3, 00:16:32.385 "base_bdevs_list": [ 00:16:32.385 { 00:16:32.385 "name": null, 00:16:32.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.385 "is_configured": false, 00:16:32.385 "data_offset": 2048, 00:16:32.385 "data_size": 63488 00:16:32.385 }, 00:16:32.385 { 00:16:32.385 "name": "pt2", 00:16:32.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.385 "is_configured": true, 00:16:32.385 "data_offset": 2048, 00:16:32.385 "data_size": 63488 00:16:32.385 }, 00:16:32.385 { 00:16:32.385 "name": "pt3", 00:16:32.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.385 "is_configured": true, 00:16:32.385 "data_offset": 2048, 00:16:32.385 "data_size": 63488 00:16:32.385 }, 00:16:32.385 { 00:16:32.385 "name": null, 00:16:32.385 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.385 "is_configured": false, 00:16:32.385 "data_offset": 2048, 
00:16:32.385 "data_size": 63488 00:16:32.385 } 00:16:32.385 ] 00:16:32.385 }' 00:16:32.385 04:05:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.386 04:05:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.645 [2024-11-18 04:05:29.247456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:32.645 [2024-11-18 04:05:29.247536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.645 [2024-11-18 04:05:29.247570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:32.645 [2024-11-18 04:05:29.247595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.645 [2024-11-18 04:05:29.247954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.645 [2024-11-18 04:05:29.248015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:32.645 [2024-11-18 04:05:29.248096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:32.645 [2024-11-18 04:05:29.248140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:32.645 [2024-11-18 04:05:29.248270] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.645 [2024-11-18 04:05:29.248305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:32.645 [2024-11-18 04:05:29.248547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:32.645 [2024-11-18 04:05:29.255565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.645 [2024-11-18 04:05:29.255621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:32.645 [2024-11-18 04:05:29.255915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.645 pt4 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.645 
04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.645 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.905 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.905 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.905 "name": "raid_bdev1", 00:16:32.905 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:32.905 "strip_size_kb": 64, 00:16:32.905 "state": "online", 00:16:32.905 "raid_level": "raid5f", 00:16:32.905 "superblock": true, 00:16:32.905 "num_base_bdevs": 4, 00:16:32.905 "num_base_bdevs_discovered": 3, 00:16:32.905 "num_base_bdevs_operational": 3, 00:16:32.905 "base_bdevs_list": [ 00:16:32.905 { 00:16:32.905 "name": null, 00:16:32.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.905 "is_configured": false, 00:16:32.905 "data_offset": 2048, 00:16:32.905 "data_size": 63488 00:16:32.905 }, 00:16:32.905 { 00:16:32.905 "name": "pt2", 00:16:32.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.905 "is_configured": true, 00:16:32.905 "data_offset": 2048, 00:16:32.905 "data_size": 63488 00:16:32.905 }, 00:16:32.905 { 00:16:32.905 "name": "pt3", 00:16:32.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.905 "is_configured": true, 00:16:32.905 "data_offset": 2048, 00:16:32.905 "data_size": 63488 00:16:32.905 }, 00:16:32.905 { 00:16:32.905 "name": "pt4", 00:16:32.905 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.905 "is_configured": true, 00:16:32.905 "data_offset": 2048, 00:16:32.905 "data_size": 63488 00:16:32.905 } 00:16:32.905 ] 00:16:32.905 }' 00:16:32.905 04:05:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.905 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 [2024-11-18 04:05:29.699904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.165 [2024-11-18 04:05:29.699969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.165 [2024-11-18 04:05:29.700041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.165 [2024-11-18 04:05:29.700131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.165 [2024-11-18 04:05:29.700201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 [2024-11-18 04:05:29.771810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.165 [2024-11-18 04:05:29.771911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.165 [2024-11-18 04:05:29.771975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:33.165 [2024-11-18 04:05:29.772006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.165 [2024-11-18 04:05:29.774144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.165 [2024-11-18 04:05:29.774214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.165 [2024-11-18 04:05:29.774300] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:33.165 [2024-11-18 04:05:29.774381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.165 
[2024-11-18 04:05:29.774527] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:33.165 [2024-11-18 04:05:29.774592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.165 [2024-11-18 04:05:29.774628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:33.165 [2024-11-18 04:05:29.774729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.165 [2024-11-18 04:05:29.774873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:33.165 pt1 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.425 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.425 "name": "raid_bdev1", 00:16:33.425 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:33.425 "strip_size_kb": 64, 00:16:33.425 "state": "configuring", 00:16:33.425 "raid_level": "raid5f", 00:16:33.425 "superblock": true, 00:16:33.425 "num_base_bdevs": 4, 00:16:33.425 "num_base_bdevs_discovered": 2, 00:16:33.425 "num_base_bdevs_operational": 3, 00:16:33.425 "base_bdevs_list": [ 00:16:33.425 { 00:16:33.425 "name": null, 00:16:33.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.425 "is_configured": false, 00:16:33.425 "data_offset": 2048, 00:16:33.425 "data_size": 63488 00:16:33.425 }, 00:16:33.425 { 00:16:33.425 "name": "pt2", 00:16:33.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.425 "is_configured": true, 00:16:33.425 "data_offset": 2048, 00:16:33.425 "data_size": 63488 00:16:33.425 }, 00:16:33.425 { 00:16:33.425 "name": "pt3", 00:16:33.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.425 "is_configured": true, 00:16:33.425 "data_offset": 2048, 00:16:33.425 "data_size": 63488 00:16:33.425 }, 00:16:33.425 { 00:16:33.425 "name": null, 00:16:33.425 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.425 "is_configured": false, 00:16:33.425 "data_offset": 2048, 00:16:33.425 "data_size": 63488 00:16:33.425 } 00:16:33.425 ] 
00:16:33.425 }' 00:16:33.425 04:05:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.425 04:05:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 [2024-11-18 04:05:30.254986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:33.686 [2024-11-18 04:05:30.255070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.686 [2024-11-18 04:05:30.255124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:33.686 [2024-11-18 04:05:30.255151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.686 [2024-11-18 04:05:30.255519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.686 [2024-11-18 04:05:30.255572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:33.686 [2024-11-18 04:05:30.255657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:33.686 [2024-11-18 04:05:30.255710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.686 [2024-11-18 04:05:30.255866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:33.686 [2024-11-18 04:05:30.255905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:33.686 [2024-11-18 04:05:30.256167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:33.686 [2024-11-18 04:05:30.263391] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:33.686 [2024-11-18 04:05:30.263447] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:33.686 [2024-11-18 04:05:30.263746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.686 pt4 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.686 04:05:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.686 "name": "raid_bdev1", 00:16:33.686 "uuid": "ade01a2a-8712-4d8b-9730-6ec5140e74f5", 00:16:33.686 "strip_size_kb": 64, 00:16:33.686 "state": "online", 00:16:33.686 "raid_level": "raid5f", 00:16:33.686 "superblock": true, 00:16:33.686 "num_base_bdevs": 4, 00:16:33.686 "num_base_bdevs_discovered": 3, 00:16:33.686 "num_base_bdevs_operational": 3, 00:16:33.686 "base_bdevs_list": [ 00:16:33.686 { 00:16:33.686 "name": null, 00:16:33.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.686 "is_configured": false, 00:16:33.686 "data_offset": 2048, 00:16:33.686 "data_size": 63488 00:16:33.686 }, 00:16:33.686 { 00:16:33.686 "name": "pt2", 00:16:33.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.686 "is_configured": true, 00:16:33.686 "data_offset": 2048, 00:16:33.686 "data_size": 63488 00:16:33.686 }, 00:16:33.686 { 00:16:33.686 "name": "pt3", 00:16:33.686 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.686 "is_configured": true, 00:16:33.686 "data_offset": 2048, 00:16:33.686 "data_size": 63488 
00:16:33.686 }, 00:16:33.686 { 00:16:33.686 "name": "pt4", 00:16:33.686 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.686 "is_configured": true, 00:16:33.686 "data_offset": 2048, 00:16:33.686 "data_size": 63488 00:16:33.686 } 00:16:33.686 ] 00:16:33.686 }' 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.686 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.255 [2024-11-18 04:05:30.779979] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ade01a2a-8712-4d8b-9730-6ec5140e74f5 '!=' ade01a2a-8712-4d8b-9730-6ec5140e74f5 ']' 00:16:34.255 04:05:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83999 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83999 ']' 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83999 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83999 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.255 killing process with pid 83999 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83999' 00:16:34.255 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83999 00:16:34.256 [2024-11-18 04:05:30.860728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.256 [2024-11-18 04:05:30.860815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.256 [2024-11-18 04:05:30.860898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.256 [2024-11-18 04:05:30.860910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:34.256 04:05:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83999 00:16:34.825 [2024-11-18 04:05:31.230664] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.779 04:05:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:35.779 
00:16:35.779 real 0m8.644s 00:16:35.779 user 0m13.692s 00:16:35.779 sys 0m1.628s 00:16:35.779 04:05:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.779 04:05:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.779 ************************************ 00:16:35.779 END TEST raid5f_superblock_test 00:16:35.779 ************************************ 00:16:35.779 04:05:32 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:35.779 04:05:32 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:35.779 04:05:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:35.779 04:05:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.779 04:05:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.779 ************************************ 00:16:35.779 START TEST raid5f_rebuild_test 00:16:35.779 ************************************ 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.779 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:35.780 04:05:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84488 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:35.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84488 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84488 ']' 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.780 04:05:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.064 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:36.064 Zero copy mechanism will not be used. 00:16:36.064 [2024-11-18 04:05:32.463931] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:36.064 [2024-11-18 04:05:32.464087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84488 ] 00:16:36.064 [2024-11-18 04:05:32.641491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.349 [2024-11-18 04:05:32.749072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.349 [2024-11-18 04:05:32.942034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.349 [2024-11-18 04:05:32.942089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 BaseBdev1_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 [2024-11-18 04:05:33.318313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:36.920 [2024-11-18 04:05:33.318440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.920 [2024-11-18 04:05:33.318482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:36.920 [2024-11-18 04:05:33.318531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.920 [2024-11-18 04:05:33.320555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.920 [2024-11-18 04:05:33.320657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.920 BaseBdev1 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 BaseBdev2_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 [2024-11-18 04:05:33.372728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:36.920 [2024-11-18 04:05:33.372853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.920 [2024-11-18 04:05:33.372895] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:36.920 [2024-11-18 04:05:33.372936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.920 [2024-11-18 04:05:33.374962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.920 [2024-11-18 04:05:33.375030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:36.920 BaseBdev2 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 BaseBdev3_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 [2024-11-18 04:05:33.438191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:36.920 [2024-11-18 04:05:33.438293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.920 [2024-11-18 04:05:33.438330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:36.920 [2024-11-18 04:05:33.438358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.920 
[2024-11-18 04:05:33.440375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.920 [2024-11-18 04:05:33.440412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:36.920 BaseBdev3 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 BaseBdev4_malloc 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.920 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.920 [2024-11-18 04:05:33.490438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:36.920 [2024-11-18 04:05:33.490540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.920 [2024-11-18 04:05:33.490574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:36.920 [2024-11-18 04:05:33.490602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.921 [2024-11-18 04:05:33.492554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.921 [2024-11-18 04:05:33.492644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:36.921 BaseBdev4 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.921 spare_malloc 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.921 spare_delay 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.921 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.921 [2024-11-18 04:05:33.554457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:36.921 [2024-11-18 04:05:33.554565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.921 [2024-11-18 04:05:33.554601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:36.921 [2024-11-18 04:05:33.554629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.921 [2024-11-18 04:05:33.556619] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.921 [2024-11-18 04:05:33.556692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:36.921 spare 00:16:37.181 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.181 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:37.181 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.181 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.181 [2024-11-18 04:05:33.566486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.181 [2024-11-18 04:05:33.568258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.181 [2024-11-18 04:05:33.568353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.181 [2024-11-18 04:05:33.568421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.181 [2024-11-18 04:05:33.568552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:37.181 [2024-11-18 04:05:33.568598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:37.181 [2024-11-18 04:05:33.568858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:37.181 [2024-11-18 04:05:33.576392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:37.181 [2024-11-18 04:05:33.576443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:37.181 [2024-11-18 04:05:33.576675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.181 04:05:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.181 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.182 "name": "raid_bdev1", 00:16:37.182 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:37.182 "strip_size_kb": 64, 00:16:37.182 "state": "online", 00:16:37.182 
"raid_level": "raid5f", 00:16:37.182 "superblock": false, 00:16:37.182 "num_base_bdevs": 4, 00:16:37.182 "num_base_bdevs_discovered": 4, 00:16:37.182 "num_base_bdevs_operational": 4, 00:16:37.182 "base_bdevs_list": [ 00:16:37.182 { 00:16:37.182 "name": "BaseBdev1", 00:16:37.182 "uuid": "00c38217-1d1f-5680-9263-5782133c9312", 00:16:37.182 "is_configured": true, 00:16:37.182 "data_offset": 0, 00:16:37.182 "data_size": 65536 00:16:37.182 }, 00:16:37.182 { 00:16:37.182 "name": "BaseBdev2", 00:16:37.182 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:37.182 "is_configured": true, 00:16:37.182 "data_offset": 0, 00:16:37.182 "data_size": 65536 00:16:37.182 }, 00:16:37.182 { 00:16:37.182 "name": "BaseBdev3", 00:16:37.182 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:37.182 "is_configured": true, 00:16:37.182 "data_offset": 0, 00:16:37.182 "data_size": 65536 00:16:37.182 }, 00:16:37.182 { 00:16:37.182 "name": "BaseBdev4", 00:16:37.182 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:37.182 "is_configured": true, 00:16:37.182 "data_offset": 0, 00:16:37.182 "data_size": 65536 00:16:37.182 } 00:16:37.182 ] 00:16:37.182 }' 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.182 04:05:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.441 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.441 04:05:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 [2024-11-18 04:05:34.008357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:37.442 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:37.702 [2024-11-18 04:05:34.283738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:37.702 /dev/nbd0 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.702 1+0 records in 00:16:37.702 1+0 records out 00:16:37.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557885 s, 7.3 MB/s 00:16:37.702 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:37.963 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:38.534 512+0 records in 00:16:38.534 512+0 records out 00:16:38.534 100663296 bytes (101 MB, 96 MiB) copied, 0.538972 s, 187 MB/s 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.534 04:05:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:38.534 [2024-11-18 04:05:35.100948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.534 [2024-11-18 04:05:35.133525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.534 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.794 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.794 "name": "raid_bdev1", 00:16:38.794 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:38.794 "strip_size_kb": 64, 00:16:38.794 "state": "online", 00:16:38.794 "raid_level": "raid5f", 00:16:38.794 "superblock": false, 00:16:38.794 "num_base_bdevs": 4, 00:16:38.794 "num_base_bdevs_discovered": 3, 00:16:38.795 "num_base_bdevs_operational": 3, 00:16:38.795 "base_bdevs_list": [ 00:16:38.795 { 00:16:38.795 "name": null, 00:16:38.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.795 "is_configured": false, 00:16:38.795 "data_offset": 0, 00:16:38.795 "data_size": 65536 00:16:38.795 }, 00:16:38.795 { 00:16:38.795 "name": "BaseBdev2", 00:16:38.795 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:38.795 "is_configured": true, 00:16:38.795 "data_offset": 0, 00:16:38.795 "data_size": 65536 00:16:38.795 }, 00:16:38.795 { 00:16:38.795 "name": "BaseBdev3", 00:16:38.795 "uuid": 
"36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:38.795 "is_configured": true, 00:16:38.795 "data_offset": 0, 00:16:38.795 "data_size": 65536 00:16:38.795 }, 00:16:38.795 { 00:16:38.795 "name": "BaseBdev4", 00:16:38.795 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:38.795 "is_configured": true, 00:16:38.795 "data_offset": 0, 00:16:38.795 "data_size": 65536 00:16:38.795 } 00:16:38.795 ] 00:16:38.795 }' 00:16:38.795 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.795 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.055 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.055 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.055 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.055 [2024-11-18 04:05:35.608849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.055 [2024-11-18 04:05:35.623742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:39.055 04:05:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.055 04:05:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:39.055 [2024-11-18 04:05:35.632685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.995 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.995 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.995 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.995 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.995 04:05:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.255 "name": "raid_bdev1", 00:16:40.255 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:40.255 "strip_size_kb": 64, 00:16:40.255 "state": "online", 00:16:40.255 "raid_level": "raid5f", 00:16:40.255 "superblock": false, 00:16:40.255 "num_base_bdevs": 4, 00:16:40.255 "num_base_bdevs_discovered": 4, 00:16:40.255 "num_base_bdevs_operational": 4, 00:16:40.255 "process": { 00:16:40.255 "type": "rebuild", 00:16:40.255 "target": "spare", 00:16:40.255 "progress": { 00:16:40.255 "blocks": 19200, 00:16:40.255 "percent": 9 00:16:40.255 } 00:16:40.255 }, 00:16:40.255 "base_bdevs_list": [ 00:16:40.255 { 00:16:40.255 "name": "spare", 00:16:40.255 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:40.255 "is_configured": true, 00:16:40.255 "data_offset": 0, 00:16:40.255 "data_size": 65536 00:16:40.255 }, 00:16:40.255 { 00:16:40.255 "name": "BaseBdev2", 00:16:40.255 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:40.255 "is_configured": true, 00:16:40.255 "data_offset": 0, 00:16:40.255 "data_size": 65536 00:16:40.255 }, 00:16:40.255 { 00:16:40.255 "name": "BaseBdev3", 00:16:40.255 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:40.255 "is_configured": true, 00:16:40.255 "data_offset": 0, 00:16:40.255 "data_size": 65536 00:16:40.255 }, 
00:16:40.255 { 00:16:40.255 "name": "BaseBdev4", 00:16:40.255 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:40.255 "is_configured": true, 00:16:40.255 "data_offset": 0, 00:16:40.255 "data_size": 65536 00:16:40.255 } 00:16:40.255 ] 00:16:40.255 }' 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.255 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.256 [2024-11-18 04:05:36.787225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.256 [2024-11-18 04:05:36.838138] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.256 [2024-11-18 04:05:36.838194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.256 [2024-11-18 04:05:36.838209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.256 [2024-11-18 04:05:36.838218] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.256 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.516 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.516 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.516 "name": "raid_bdev1", 00:16:40.516 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:40.516 "strip_size_kb": 64, 00:16:40.516 "state": "online", 00:16:40.516 "raid_level": "raid5f", 00:16:40.516 "superblock": false, 00:16:40.516 "num_base_bdevs": 4, 00:16:40.516 "num_base_bdevs_discovered": 3, 00:16:40.516 "num_base_bdevs_operational": 3, 00:16:40.516 "base_bdevs_list": [ 00:16:40.516 { 00:16:40.516 "name": null, 00:16:40.516 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:40.516 "is_configured": false, 00:16:40.516 "data_offset": 0, 00:16:40.516 "data_size": 65536 00:16:40.516 }, 00:16:40.516 { 00:16:40.516 "name": "BaseBdev2", 00:16:40.516 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:40.516 "is_configured": true, 00:16:40.516 "data_offset": 0, 00:16:40.516 "data_size": 65536 00:16:40.516 }, 00:16:40.516 { 00:16:40.516 "name": "BaseBdev3", 00:16:40.516 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:40.516 "is_configured": true, 00:16:40.516 "data_offset": 0, 00:16:40.516 "data_size": 65536 00:16:40.516 }, 00:16:40.516 { 00:16:40.516 "name": "BaseBdev4", 00:16:40.516 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:40.516 "is_configured": true, 00:16:40.516 "data_offset": 0, 00:16:40.516 "data_size": 65536 00:16:40.516 } 00:16:40.516 ] 00:16:40.516 }' 00:16:40.516 04:05:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.516 04:05:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.775 "name": "raid_bdev1", 00:16:40.775 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:40.775 "strip_size_kb": 64, 00:16:40.775 "state": "online", 00:16:40.775 "raid_level": "raid5f", 00:16:40.775 "superblock": false, 00:16:40.775 "num_base_bdevs": 4, 00:16:40.775 "num_base_bdevs_discovered": 3, 00:16:40.775 "num_base_bdevs_operational": 3, 00:16:40.775 "base_bdevs_list": [ 00:16:40.775 { 00:16:40.775 "name": null, 00:16:40.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.775 "is_configured": false, 00:16:40.775 "data_offset": 0, 00:16:40.775 "data_size": 65536 00:16:40.775 }, 00:16:40.775 { 00:16:40.775 "name": "BaseBdev2", 00:16:40.775 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:40.775 "is_configured": true, 00:16:40.775 "data_offset": 0, 00:16:40.775 "data_size": 65536 00:16:40.775 }, 00:16:40.775 { 00:16:40.775 "name": "BaseBdev3", 00:16:40.775 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:40.775 "is_configured": true, 00:16:40.775 "data_offset": 0, 00:16:40.775 "data_size": 65536 00:16:40.775 }, 00:16:40.775 { 00:16:40.775 "name": "BaseBdev4", 00:16:40.775 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:40.775 "is_configured": true, 00:16:40.775 "data_offset": 0, 00:16:40.775 "data_size": 65536 00:16:40.775 } 00:16:40.775 ] 00:16:40.775 }' 00:16:40.775 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.035 [2024-11-18 04:05:37.469964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.035 [2024-11-18 04:05:37.484205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.035 04:05:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:41.035 [2024-11-18 04:05:37.492478] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.975 "name": "raid_bdev1", 00:16:41.975 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:41.975 "strip_size_kb": 64, 00:16:41.975 "state": "online", 00:16:41.975 "raid_level": "raid5f", 00:16:41.975 "superblock": false, 00:16:41.975 "num_base_bdevs": 4, 00:16:41.975 "num_base_bdevs_discovered": 4, 00:16:41.975 "num_base_bdevs_operational": 4, 00:16:41.975 "process": { 00:16:41.975 "type": "rebuild", 00:16:41.975 "target": "spare", 00:16:41.975 "progress": { 00:16:41.975 "blocks": 19200, 00:16:41.975 "percent": 9 00:16:41.975 } 00:16:41.975 }, 00:16:41.975 "base_bdevs_list": [ 00:16:41.975 { 00:16:41.975 "name": "spare", 00:16:41.975 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:41.975 "is_configured": true, 00:16:41.975 "data_offset": 0, 00:16:41.975 "data_size": 65536 00:16:41.975 }, 00:16:41.975 { 00:16:41.975 "name": "BaseBdev2", 00:16:41.975 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:41.975 "is_configured": true, 00:16:41.975 "data_offset": 0, 00:16:41.975 "data_size": 65536 00:16:41.975 }, 00:16:41.975 { 00:16:41.975 "name": "BaseBdev3", 00:16:41.975 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:41.975 "is_configured": true, 00:16:41.975 "data_offset": 0, 00:16:41.975 "data_size": 65536 00:16:41.975 }, 00:16:41.975 { 00:16:41.975 "name": "BaseBdev4", 00:16:41.975 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:41.975 "is_configured": true, 00:16:41.975 "data_offset": 0, 00:16:41.975 "data_size": 65536 00:16:41.975 } 00:16:41.975 ] 00:16:41.975 }' 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.975 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=612 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.235 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.235 "name": "raid_bdev1", 00:16:42.235 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:42.235 "strip_size_kb": 64, 
00:16:42.235 "state": "online", 00:16:42.235 "raid_level": "raid5f", 00:16:42.235 "superblock": false, 00:16:42.236 "num_base_bdevs": 4, 00:16:42.236 "num_base_bdevs_discovered": 4, 00:16:42.236 "num_base_bdevs_operational": 4, 00:16:42.236 "process": { 00:16:42.236 "type": "rebuild", 00:16:42.236 "target": "spare", 00:16:42.236 "progress": { 00:16:42.236 "blocks": 21120, 00:16:42.236 "percent": 10 00:16:42.236 } 00:16:42.236 }, 00:16:42.236 "base_bdevs_list": [ 00:16:42.236 { 00:16:42.236 "name": "spare", 00:16:42.236 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:42.236 "is_configured": true, 00:16:42.236 "data_offset": 0, 00:16:42.236 "data_size": 65536 00:16:42.236 }, 00:16:42.236 { 00:16:42.236 "name": "BaseBdev2", 00:16:42.236 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:42.236 "is_configured": true, 00:16:42.236 "data_offset": 0, 00:16:42.236 "data_size": 65536 00:16:42.236 }, 00:16:42.236 { 00:16:42.236 "name": "BaseBdev3", 00:16:42.236 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:42.236 "is_configured": true, 00:16:42.236 "data_offset": 0, 00:16:42.236 "data_size": 65536 00:16:42.236 }, 00:16:42.236 { 00:16:42.236 "name": "BaseBdev4", 00:16:42.236 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:42.236 "is_configured": true, 00:16:42.236 "data_offset": 0, 00:16:42.236 "data_size": 65536 00:16:42.236 } 00:16:42.236 ] 00:16:42.236 }' 00:16:42.236 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.236 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.236 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.236 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.236 04:05:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.174 04:05:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.435 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.435 "name": "raid_bdev1", 00:16:43.435 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:43.435 "strip_size_kb": 64, 00:16:43.435 "state": "online", 00:16:43.435 "raid_level": "raid5f", 00:16:43.435 "superblock": false, 00:16:43.435 "num_base_bdevs": 4, 00:16:43.435 "num_base_bdevs_discovered": 4, 00:16:43.435 "num_base_bdevs_operational": 4, 00:16:43.435 "process": { 00:16:43.435 "type": "rebuild", 00:16:43.435 "target": "spare", 00:16:43.435 "progress": { 00:16:43.435 "blocks": 42240, 00:16:43.435 "percent": 21 00:16:43.435 } 00:16:43.435 }, 00:16:43.435 "base_bdevs_list": [ 00:16:43.435 { 00:16:43.435 "name": "spare", 00:16:43.435 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:43.435 "is_configured": true, 
00:16:43.435 "data_offset": 0, 00:16:43.435 "data_size": 65536 00:16:43.435 }, 00:16:43.435 { 00:16:43.435 "name": "BaseBdev2", 00:16:43.435 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:43.435 "is_configured": true, 00:16:43.435 "data_offset": 0, 00:16:43.435 "data_size": 65536 00:16:43.435 }, 00:16:43.435 { 00:16:43.435 "name": "BaseBdev3", 00:16:43.435 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:43.435 "is_configured": true, 00:16:43.435 "data_offset": 0, 00:16:43.435 "data_size": 65536 00:16:43.435 }, 00:16:43.435 { 00:16:43.435 "name": "BaseBdev4", 00:16:43.435 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:43.435 "is_configured": true, 00:16:43.435 "data_offset": 0, 00:16:43.435 "data_size": 65536 00:16:43.435 } 00:16:43.435 ] 00:16:43.435 }' 00:16:43.435 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.435 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.435 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.435 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.435 04:05:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.375 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.376 04:05:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.376 04:05:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.376 04:05:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.376 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.376 "name": "raid_bdev1", 00:16:44.376 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:44.376 "strip_size_kb": 64, 00:16:44.376 "state": "online", 00:16:44.376 "raid_level": "raid5f", 00:16:44.376 "superblock": false, 00:16:44.376 "num_base_bdevs": 4, 00:16:44.376 "num_base_bdevs_discovered": 4, 00:16:44.376 "num_base_bdevs_operational": 4, 00:16:44.376 "process": { 00:16:44.376 "type": "rebuild", 00:16:44.376 "target": "spare", 00:16:44.376 "progress": { 00:16:44.376 "blocks": 65280, 00:16:44.376 "percent": 33 00:16:44.376 } 00:16:44.376 }, 00:16:44.376 "base_bdevs_list": [ 00:16:44.376 { 00:16:44.376 "name": "spare", 00:16:44.376 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:44.376 "is_configured": true, 00:16:44.376 "data_offset": 0, 00:16:44.376 "data_size": 65536 00:16:44.376 }, 00:16:44.376 { 00:16:44.376 "name": "BaseBdev2", 00:16:44.376 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:44.376 "is_configured": true, 00:16:44.376 "data_offset": 0, 00:16:44.376 "data_size": 65536 00:16:44.376 }, 00:16:44.376 { 00:16:44.376 "name": "BaseBdev3", 00:16:44.376 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:44.376 "is_configured": true, 00:16:44.376 "data_offset": 0, 00:16:44.376 "data_size": 65536 00:16:44.376 }, 00:16:44.376 { 00:16:44.376 "name": "BaseBdev4", 00:16:44.376 "uuid": 
"2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:44.376 "is_configured": true, 00:16:44.376 "data_offset": 0, 00:16:44.376 "data_size": 65536 00:16:44.376 } 00:16:44.376 ] 00:16:44.376 }' 00:16:44.376 04:05:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.636 04:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.636 04:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.636 04:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.636 04:05:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.576 "name": "raid_bdev1", 00:16:45.576 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:45.576 "strip_size_kb": 64, 00:16:45.576 "state": "online", 00:16:45.576 "raid_level": "raid5f", 00:16:45.576 "superblock": false, 00:16:45.576 "num_base_bdevs": 4, 00:16:45.576 "num_base_bdevs_discovered": 4, 00:16:45.576 "num_base_bdevs_operational": 4, 00:16:45.576 "process": { 00:16:45.576 "type": "rebuild", 00:16:45.576 "target": "spare", 00:16:45.576 "progress": { 00:16:45.576 "blocks": 86400, 00:16:45.576 "percent": 43 00:16:45.576 } 00:16:45.576 }, 00:16:45.576 "base_bdevs_list": [ 00:16:45.576 { 00:16:45.576 "name": "spare", 00:16:45.576 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:45.576 "is_configured": true, 00:16:45.576 "data_offset": 0, 00:16:45.576 "data_size": 65536 00:16:45.576 }, 00:16:45.576 { 00:16:45.576 "name": "BaseBdev2", 00:16:45.576 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:45.576 "is_configured": true, 00:16:45.576 "data_offset": 0, 00:16:45.576 "data_size": 65536 00:16:45.576 }, 00:16:45.576 { 00:16:45.576 "name": "BaseBdev3", 00:16:45.576 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:45.576 "is_configured": true, 00:16:45.576 "data_offset": 0, 00:16:45.576 "data_size": 65536 00:16:45.576 }, 00:16:45.576 { 00:16:45.576 "name": "BaseBdev4", 00:16:45.576 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:45.576 "is_configured": true, 00:16:45.576 "data_offset": 0, 00:16:45.576 "data_size": 65536 00:16:45.576 } 00:16:45.576 ] 00:16:45.576 }' 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.576 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.836 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:45.836 04:05:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.776 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.776 "name": "raid_bdev1", 00:16:46.776 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:46.776 "strip_size_kb": 64, 00:16:46.776 "state": "online", 00:16:46.776 "raid_level": "raid5f", 00:16:46.776 "superblock": false, 00:16:46.776 "num_base_bdevs": 4, 00:16:46.776 "num_base_bdevs_discovered": 4, 00:16:46.776 "num_base_bdevs_operational": 4, 00:16:46.776 "process": { 00:16:46.776 "type": "rebuild", 00:16:46.776 "target": "spare", 00:16:46.776 "progress": { 00:16:46.776 "blocks": 109440, 00:16:46.776 "percent": 55 00:16:46.776 } 00:16:46.777 }, 00:16:46.777 
"base_bdevs_list": [ 00:16:46.777 { 00:16:46.777 "name": "spare", 00:16:46.777 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:46.777 "is_configured": true, 00:16:46.777 "data_offset": 0, 00:16:46.777 "data_size": 65536 00:16:46.777 }, 00:16:46.777 { 00:16:46.777 "name": "BaseBdev2", 00:16:46.777 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:46.777 "is_configured": true, 00:16:46.777 "data_offset": 0, 00:16:46.777 "data_size": 65536 00:16:46.777 }, 00:16:46.777 { 00:16:46.777 "name": "BaseBdev3", 00:16:46.777 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:46.777 "is_configured": true, 00:16:46.777 "data_offset": 0, 00:16:46.777 "data_size": 65536 00:16:46.777 }, 00:16:46.777 { 00:16:46.777 "name": "BaseBdev4", 00:16:46.777 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:46.777 "is_configured": true, 00:16:46.777 "data_offset": 0, 00:16:46.777 "data_size": 65536 00:16:46.777 } 00:16:46.777 ] 00:16:46.777 }' 00:16:46.777 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.777 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.777 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.777 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.777 04:05:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.159 04:05:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.159 "name": "raid_bdev1", 00:16:48.159 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:48.159 "strip_size_kb": 64, 00:16:48.159 "state": "online", 00:16:48.159 "raid_level": "raid5f", 00:16:48.159 "superblock": false, 00:16:48.159 "num_base_bdevs": 4, 00:16:48.159 "num_base_bdevs_discovered": 4, 00:16:48.159 "num_base_bdevs_operational": 4, 00:16:48.159 "process": { 00:16:48.159 "type": "rebuild", 00:16:48.159 "target": "spare", 00:16:48.159 "progress": { 00:16:48.159 "blocks": 130560, 00:16:48.159 "percent": 66 00:16:48.159 } 00:16:48.159 }, 00:16:48.159 "base_bdevs_list": [ 00:16:48.159 { 00:16:48.159 "name": "spare", 00:16:48.159 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:48.159 "is_configured": true, 00:16:48.159 "data_offset": 0, 00:16:48.159 "data_size": 65536 00:16:48.159 }, 00:16:48.159 { 00:16:48.159 "name": "BaseBdev2", 00:16:48.159 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:48.159 "is_configured": true, 00:16:48.159 "data_offset": 0, 00:16:48.159 "data_size": 65536 00:16:48.159 }, 00:16:48.159 { 00:16:48.159 "name": "BaseBdev3", 00:16:48.159 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:48.159 
"is_configured": true, 00:16:48.159 "data_offset": 0, 00:16:48.159 "data_size": 65536 00:16:48.159 }, 00:16:48.159 { 00:16:48.159 "name": "BaseBdev4", 00:16:48.159 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:48.159 "is_configured": true, 00:16:48.159 "data_offset": 0, 00:16:48.159 "data_size": 65536 00:16:48.159 } 00:16:48.159 ] 00:16:48.159 }' 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.159 04:05:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.100 "name": "raid_bdev1", 00:16:49.100 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:49.100 "strip_size_kb": 64, 00:16:49.100 "state": "online", 00:16:49.100 "raid_level": "raid5f", 00:16:49.100 "superblock": false, 00:16:49.100 "num_base_bdevs": 4, 00:16:49.100 "num_base_bdevs_discovered": 4, 00:16:49.100 "num_base_bdevs_operational": 4, 00:16:49.100 "process": { 00:16:49.100 "type": "rebuild", 00:16:49.100 "target": "spare", 00:16:49.100 "progress": { 00:16:49.100 "blocks": 153600, 00:16:49.100 "percent": 78 00:16:49.100 } 00:16:49.100 }, 00:16:49.100 "base_bdevs_list": [ 00:16:49.100 { 00:16:49.100 "name": "spare", 00:16:49.100 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:49.100 "is_configured": true, 00:16:49.100 "data_offset": 0, 00:16:49.100 "data_size": 65536 00:16:49.100 }, 00:16:49.100 { 00:16:49.100 "name": "BaseBdev2", 00:16:49.100 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:49.100 "is_configured": true, 00:16:49.100 "data_offset": 0, 00:16:49.100 "data_size": 65536 00:16:49.100 }, 00:16:49.100 { 00:16:49.100 "name": "BaseBdev3", 00:16:49.100 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:49.100 "is_configured": true, 00:16:49.100 "data_offset": 0, 00:16:49.100 "data_size": 65536 00:16:49.100 }, 00:16:49.100 { 00:16:49.100 "name": "BaseBdev4", 00:16:49.100 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:49.100 "is_configured": true, 00:16:49.100 "data_offset": 0, 00:16:49.100 "data_size": 65536 00:16:49.100 } 00:16:49.100 ] 00:16:49.100 }' 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.100 04:05:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.100 04:05:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.484 "name": "raid_bdev1", 00:16:50.484 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:50.484 "strip_size_kb": 64, 00:16:50.484 "state": "online", 00:16:50.484 "raid_level": "raid5f", 00:16:50.484 "superblock": false, 00:16:50.484 "num_base_bdevs": 4, 00:16:50.484 "num_base_bdevs_discovered": 4, 00:16:50.484 "num_base_bdevs_operational": 4, 00:16:50.484 "process": { 00:16:50.484 
"type": "rebuild", 00:16:50.484 "target": "spare", 00:16:50.484 "progress": { 00:16:50.484 "blocks": 174720, 00:16:50.484 "percent": 88 00:16:50.484 } 00:16:50.484 }, 00:16:50.484 "base_bdevs_list": [ 00:16:50.484 { 00:16:50.484 "name": "spare", 00:16:50.484 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:50.484 "is_configured": true, 00:16:50.484 "data_offset": 0, 00:16:50.484 "data_size": 65536 00:16:50.484 }, 00:16:50.484 { 00:16:50.484 "name": "BaseBdev2", 00:16:50.484 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:50.484 "is_configured": true, 00:16:50.484 "data_offset": 0, 00:16:50.484 "data_size": 65536 00:16:50.484 }, 00:16:50.484 { 00:16:50.484 "name": "BaseBdev3", 00:16:50.484 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:50.484 "is_configured": true, 00:16:50.484 "data_offset": 0, 00:16:50.484 "data_size": 65536 00:16:50.484 }, 00:16:50.484 { 00:16:50.484 "name": "BaseBdev4", 00:16:50.484 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:50.484 "is_configured": true, 00:16:50.484 "data_offset": 0, 00:16:50.484 "data_size": 65536 00:16:50.484 } 00:16:50.484 ] 00:16:50.484 }' 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.484 04:05:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.428 [2024-11-18 04:05:47.833734] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:51.428 [2024-11-18 04:05:47.833864] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:51.428 [2024-11-18 04:05:47.833949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.428 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.428 "name": "raid_bdev1", 00:16:51.428 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:51.428 "strip_size_kb": 64, 00:16:51.428 "state": "online", 00:16:51.429 "raid_level": "raid5f", 00:16:51.429 "superblock": false, 00:16:51.429 "num_base_bdevs": 4, 00:16:51.429 "num_base_bdevs_discovered": 4, 00:16:51.429 "num_base_bdevs_operational": 4, 00:16:51.429 "process": { 00:16:51.429 "type": "rebuild", 00:16:51.429 "target": "spare", 00:16:51.429 "progress": { 00:16:51.429 "blocks": 195840, 00:16:51.429 "percent": 99 00:16:51.429 } 00:16:51.429 }, 00:16:51.429 "base_bdevs_list": [ 00:16:51.429 { 00:16:51.429 "name": 
"spare", 00:16:51.429 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:51.429 "is_configured": true, 00:16:51.429 "data_offset": 0, 00:16:51.429 "data_size": 65536 00:16:51.429 }, 00:16:51.429 { 00:16:51.429 "name": "BaseBdev2", 00:16:51.429 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:51.429 "is_configured": true, 00:16:51.429 "data_offset": 0, 00:16:51.429 "data_size": 65536 00:16:51.429 }, 00:16:51.429 { 00:16:51.429 "name": "BaseBdev3", 00:16:51.429 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:51.429 "is_configured": true, 00:16:51.429 "data_offset": 0, 00:16:51.429 "data_size": 65536 00:16:51.429 }, 00:16:51.429 { 00:16:51.429 "name": "BaseBdev4", 00:16:51.429 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:51.429 "is_configured": true, 00:16:51.429 "data_offset": 0, 00:16:51.429 "data_size": 65536 00:16:51.429 } 00:16:51.429 ] 00:16:51.429 }' 00:16:51.429 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.429 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.429 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.429 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.429 04:05:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:52.382 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.382 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.382 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.383 04:05:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.659 "name": "raid_bdev1", 00:16:52.659 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:52.659 "strip_size_kb": 64, 00:16:52.659 "state": "online", 00:16:52.659 "raid_level": "raid5f", 00:16:52.659 "superblock": false, 00:16:52.659 "num_base_bdevs": 4, 00:16:52.659 "num_base_bdevs_discovered": 4, 00:16:52.659 "num_base_bdevs_operational": 4, 00:16:52.659 "base_bdevs_list": [ 00:16:52.659 { 00:16:52.659 "name": "spare", 00:16:52.659 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 }, 00:16:52.659 { 00:16:52.659 "name": "BaseBdev2", 00:16:52.659 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 }, 00:16:52.659 { 00:16:52.659 "name": "BaseBdev3", 00:16:52.659 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 }, 00:16:52.659 { 00:16:52.659 "name": "BaseBdev4", 00:16:52.659 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 
"data_size": 65536 00:16:52.659 } 00:16:52.659 ] 00:16:52.659 }' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.659 "name": "raid_bdev1", 00:16:52.659 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:52.659 "strip_size_kb": 64, 00:16:52.659 "state": "online", 00:16:52.659 "raid_level": "raid5f", 
00:16:52.659 "superblock": false, 00:16:52.659 "num_base_bdevs": 4, 00:16:52.659 "num_base_bdevs_discovered": 4, 00:16:52.659 "num_base_bdevs_operational": 4, 00:16:52.659 "base_bdevs_list": [ 00:16:52.659 { 00:16:52.659 "name": "spare", 00:16:52.659 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 }, 00:16:52.659 { 00:16:52.659 "name": "BaseBdev2", 00:16:52.659 "uuid": "cf962685-cdea-5c57-a364-8575c828d72e", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 }, 00:16:52.659 { 00:16:52.659 "name": "BaseBdev3", 00:16:52.659 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 }, 00:16:52.659 { 00:16:52.659 "name": "BaseBdev4", 00:16:52.659 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:52.659 "is_configured": true, 00:16:52.659 "data_offset": 0, 00:16:52.659 "data_size": 65536 00:16:52.659 } 00:16:52.659 ] 00:16:52.659 }' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.659 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.660 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.920 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.920 "name": "raid_bdev1", 00:16:52.920 "uuid": "fb7d7166-bf6a-4828-9634-0583aace5782", 00:16:52.920 "strip_size_kb": 64, 00:16:52.920 "state": "online", 00:16:52.920 "raid_level": "raid5f", 00:16:52.920 "superblock": false, 00:16:52.920 "num_base_bdevs": 4, 00:16:52.920 "num_base_bdevs_discovered": 4, 00:16:52.920 "num_base_bdevs_operational": 4, 00:16:52.920 "base_bdevs_list": [ 00:16:52.920 { 00:16:52.920 "name": "spare", 00:16:52.920 "uuid": "4dbb2399-8160-572e-b96b-ab1d48c27cf8", 00:16:52.920 "is_configured": true, 00:16:52.920 "data_offset": 0, 00:16:52.920 "data_size": 65536 00:16:52.920 }, 00:16:52.920 { 00:16:52.920 "name": "BaseBdev2", 00:16:52.920 "uuid": 
"cf962685-cdea-5c57-a364-8575c828d72e", 00:16:52.920 "is_configured": true, 00:16:52.920 "data_offset": 0, 00:16:52.920 "data_size": 65536 00:16:52.920 }, 00:16:52.920 { 00:16:52.920 "name": "BaseBdev3", 00:16:52.920 "uuid": "36ec2558-3cdf-518f-b9e6-eb25a885e8dc", 00:16:52.920 "is_configured": true, 00:16:52.920 "data_offset": 0, 00:16:52.920 "data_size": 65536 00:16:52.920 }, 00:16:52.920 { 00:16:52.920 "name": "BaseBdev4", 00:16:52.920 "uuid": "2b716063-9f8e-5be6-bcc9-e8b78bb4a922", 00:16:52.920 "is_configured": true, 00:16:52.920 "data_offset": 0, 00:16:52.920 "data_size": 65536 00:16:52.920 } 00:16:52.920 ] 00:16:52.920 }' 00:16:52.920 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.920 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.180 [2024-11-18 04:05:49.760188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.180 [2024-11-18 04:05:49.760255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:53.180 [2024-11-18 04:05:49.760335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.180 [2024-11-18 04:05:49.760439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.180 [2024-11-18 04:05:49.760449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:53.180 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.441 04:05:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:53.441 /dev/nbd0 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.441 1+0 records in 00:16:53.441 1+0 records out 00:16:53.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312633 s, 13.1 MB/s 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.441 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:53.442 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.442 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.442 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:53.442 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:53.442 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:53.702 /dev/nbd1 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.702 1+0 records in 00:16:53.702 1+0 records out 00:16:53.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408402 s, 10.0 MB/s 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.702 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.962 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.222 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84488 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84488 ']' 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84488 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84488 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.482 killing process with pid 84488 00:16:54.482 Received shutdown signal, test time was about 60.000000 seconds 00:16:54.482 00:16:54.482 Latency(us) 00:16:54.482 [2024-11-18T04:05:51.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.482 [2024-11-18T04:05:51.123Z] =================================================================================================================== 00:16:54.482 [2024-11-18T04:05:51.123Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84488' 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84488 00:16:54.482 [2024-11-18 04:05:50.985317] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.482 04:05:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84488 00:16:55.052 [2024-11-18 04:05:51.436647] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.993 04:05:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:55.993 ************************************ 00:16:55.993 END TEST raid5f_rebuild_test 00:16:55.993 ************************************ 00:16:55.993 00:16:55.993 real 0m20.102s 00:16:55.993 user 0m24.041s 00:16:55.993 sys 0m2.429s 00:16:55.993 04:05:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.993 04:05:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.993 04:05:52 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:55.994 04:05:52 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:55.994 04:05:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.994 04:05:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.994 ************************************ 00:16:55.994 START TEST raid5f_rebuild_test_sb 00:16:55.994 ************************************ 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:55.994 04:05:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85008 
00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85008 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85008 ']' 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.994 04:05:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.994 [2024-11-18 04:05:52.629059] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:55.994 [2024-11-18 04:05:52.629247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:55.994 Zero copy mechanism will not be used. 
00:16:55.994 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85008 ] 00:16:56.255 [2024-11-18 04:05:52.803462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.515 [2024-11-18 04:05:52.906949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.515 [2024-11-18 04:05:53.104223] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.515 [2024-11-18 04:05:53.104315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 BaseBdev1_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 [2024-11-18 04:05:53.500563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:57.086 [2024-11-18 04:05:53.500699] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:57.086 [2024-11-18 04:05:53.500743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.086 [2024-11-18 04:05:53.500793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.086 [2024-11-18 04:05:53.503051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.086 [2024-11-18 04:05:53.503120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:57.086 BaseBdev1 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 BaseBdev2_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 [2024-11-18 04:05:53.555104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:57.086 [2024-11-18 04:05:53.555197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.086 [2024-11-18 04:05:53.555245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.086 
[2024-11-18 04:05:53.555276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.086 [2024-11-18 04:05:53.557281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.086 [2024-11-18 04:05:53.557352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:57.086 BaseBdev2 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 BaseBdev3_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 [2024-11-18 04:05:53.643257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:57.086 [2024-11-18 04:05:53.643376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.086 [2024-11-18 04:05:53.643415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.086 [2024-11-18 04:05:53.643472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.086 [2024-11-18 04:05:53.645545] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.086 [2024-11-18 04:05:53.645619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:57.086 BaseBdev3 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 BaseBdev4_malloc 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.086 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 [2024-11-18 04:05:53.695869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:57.087 [2024-11-18 04:05:53.695970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.087 [2024-11-18 04:05:53.695991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:57.087 [2024-11-18 04:05:53.696009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.087 [2024-11-18 04:05:53.697990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.087 [2024-11-18 04:05:53.698028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:16:57.087 BaseBdev4 00:16:57.087 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.087 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:57.087 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.087 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.347 spare_malloc 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.347 spare_delay 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.347 [2024-11-18 04:05:53.761775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.347 [2024-11-18 04:05:53.761903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.347 [2024-11-18 04:05:53.761941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:57.347 [2024-11-18 04:05:53.761992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.347 [2024-11-18 04:05:53.763978] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.347 [2024-11-18 04:05:53.764065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.347 spare 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.347 [2024-11-18 04:05:53.773809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.347 [2024-11-18 04:05:53.775455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.347 [2024-11-18 04:05:53.775516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.347 [2024-11-18 04:05:53.775565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.347 [2024-11-18 04:05:53.775741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.347 [2024-11-18 04:05:53.775756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.347 [2024-11-18 04:05:53.775994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:57.347 [2024-11-18 04:05:53.782677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.347 [2024-11-18 04:05:53.782727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.347 [2024-11-18 04:05:53.782968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.347 "name": "raid_bdev1", 00:16:57.347 "uuid": 
"26509cef-5b45-4769-9e85-573e85ee8117", 00:16:57.347 "strip_size_kb": 64, 00:16:57.347 "state": "online", 00:16:57.347 "raid_level": "raid5f", 00:16:57.347 "superblock": true, 00:16:57.347 "num_base_bdevs": 4, 00:16:57.347 "num_base_bdevs_discovered": 4, 00:16:57.347 "num_base_bdevs_operational": 4, 00:16:57.347 "base_bdevs_list": [ 00:16:57.347 { 00:16:57.347 "name": "BaseBdev1", 00:16:57.347 "uuid": "e026c794-78cc-5a93-a120-d3c5c51cd074", 00:16:57.347 "is_configured": true, 00:16:57.347 "data_offset": 2048, 00:16:57.347 "data_size": 63488 00:16:57.347 }, 00:16:57.347 { 00:16:57.347 "name": "BaseBdev2", 00:16:57.347 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:16:57.347 "is_configured": true, 00:16:57.347 "data_offset": 2048, 00:16:57.347 "data_size": 63488 00:16:57.347 }, 00:16:57.347 { 00:16:57.347 "name": "BaseBdev3", 00:16:57.347 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:16:57.347 "is_configured": true, 00:16:57.347 "data_offset": 2048, 00:16:57.347 "data_size": 63488 00:16:57.347 }, 00:16:57.347 { 00:16:57.347 "name": "BaseBdev4", 00:16:57.347 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:16:57.347 "is_configured": true, 00:16:57.347 "data_offset": 2048, 00:16:57.347 "data_size": 63488 00:16:57.347 } 00:16:57.347 ] 00:16:57.347 }' 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.347 04:05:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.917 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.918 [2024-11-18 04:05:54.266545] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.918 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:57.918 [2024-11-18 04:05:54.505977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:57.918 /dev/nbd0 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.177 1+0 records in 00:16:58.177 1+0 records out 00:16:58.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326418 s, 12.5 MB/s 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:58.177 04:05:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:58.747 496+0 records in 00:16:58.747 496+0 records out 00:16:58.747 97517568 bytes (98 MB, 93 MiB) copied, 0.533849 s, 183 MB/s 00:16:58.747 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:58.747 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.747 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:58.747 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.747 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:58.748 [2024-11-18 04:05:55.343966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.748 [2024-11-18 04:05:55.368768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.748 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.008 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.008 "name": "raid_bdev1", 00:16:59.008 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:16:59.008 "strip_size_kb": 64, 00:16:59.008 "state": "online", 00:16:59.008 "raid_level": "raid5f", 00:16:59.008 "superblock": true, 00:16:59.008 "num_base_bdevs": 4, 00:16:59.008 "num_base_bdevs_discovered": 3, 00:16:59.008 "num_base_bdevs_operational": 3, 00:16:59.008 "base_bdevs_list": [ 00:16:59.008 { 00:16:59.008 "name": null, 00:16:59.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.008 "is_configured": 
false, 00:16:59.008 "data_offset": 0, 00:16:59.008 "data_size": 63488 00:16:59.008 }, 00:16:59.008 { 00:16:59.008 "name": "BaseBdev2", 00:16:59.008 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:16:59.008 "is_configured": true, 00:16:59.008 "data_offset": 2048, 00:16:59.008 "data_size": 63488 00:16:59.008 }, 00:16:59.008 { 00:16:59.008 "name": "BaseBdev3", 00:16:59.008 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:16:59.008 "is_configured": true, 00:16:59.008 "data_offset": 2048, 00:16:59.008 "data_size": 63488 00:16:59.008 }, 00:16:59.008 { 00:16:59.008 "name": "BaseBdev4", 00:16:59.008 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:16:59.008 "is_configured": true, 00:16:59.008 "data_offset": 2048, 00:16:59.008 "data_size": 63488 00:16:59.008 } 00:16:59.008 ] 00:16:59.008 }' 00:16:59.009 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.009 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.268 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:59.268 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.268 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.268 [2024-11-18 04:05:55.851926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.268 [2024-11-18 04:05:55.867160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:59.268 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.268 04:05:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:59.268 [2024-11-18 04:05:55.876188] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.650 "name": "raid_bdev1", 00:17:00.650 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:00.650 "strip_size_kb": 64, 00:17:00.650 "state": "online", 00:17:00.650 "raid_level": "raid5f", 00:17:00.650 "superblock": true, 00:17:00.650 "num_base_bdevs": 4, 00:17:00.650 "num_base_bdevs_discovered": 4, 00:17:00.650 "num_base_bdevs_operational": 4, 00:17:00.650 "process": { 00:17:00.650 "type": "rebuild", 00:17:00.650 "target": "spare", 00:17:00.650 "progress": { 00:17:00.650 "blocks": 19200, 00:17:00.650 "percent": 10 00:17:00.650 } 00:17:00.650 }, 00:17:00.650 "base_bdevs_list": [ 00:17:00.650 { 00:17:00.650 "name": "spare", 00:17:00.650 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:00.650 "is_configured": true, 00:17:00.650 "data_offset": 2048, 00:17:00.650 "data_size": 63488 00:17:00.650 }, 
00:17:00.650 { 00:17:00.650 "name": "BaseBdev2", 00:17:00.650 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:00.650 "is_configured": true, 00:17:00.650 "data_offset": 2048, 00:17:00.650 "data_size": 63488 00:17:00.650 }, 00:17:00.650 { 00:17:00.650 "name": "BaseBdev3", 00:17:00.650 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:00.650 "is_configured": true, 00:17:00.650 "data_offset": 2048, 00:17:00.650 "data_size": 63488 00:17:00.650 }, 00:17:00.650 { 00:17:00.650 "name": "BaseBdev4", 00:17:00.650 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:00.650 "is_configured": true, 00:17:00.650 "data_offset": 2048, 00:17:00.650 "data_size": 63488 00:17:00.650 } 00:17:00.650 ] 00:17:00.650 }' 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.650 04:05:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.650 [2024-11-18 04:05:57.035195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.650 [2024-11-18 04:05:57.081561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:00.650 [2024-11-18 04:05:57.081685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.650 [2024-11-18 04:05:57.081729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.650 
[2024-11-18 04:05:57.081753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.650 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.651 "name": "raid_bdev1", 00:17:00.651 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:00.651 "strip_size_kb": 64, 00:17:00.651 "state": "online", 00:17:00.651 "raid_level": "raid5f", 00:17:00.651 "superblock": true, 00:17:00.651 "num_base_bdevs": 4, 00:17:00.651 "num_base_bdevs_discovered": 3, 00:17:00.651 "num_base_bdevs_operational": 3, 00:17:00.651 "base_bdevs_list": [ 00:17:00.651 { 00:17:00.651 "name": null, 00:17:00.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.651 "is_configured": false, 00:17:00.651 "data_offset": 0, 00:17:00.651 "data_size": 63488 00:17:00.651 }, 00:17:00.651 { 00:17:00.651 "name": "BaseBdev2", 00:17:00.651 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:00.651 "is_configured": true, 00:17:00.651 "data_offset": 2048, 00:17:00.651 "data_size": 63488 00:17:00.651 }, 00:17:00.651 { 00:17:00.651 "name": "BaseBdev3", 00:17:00.651 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:00.651 "is_configured": true, 00:17:00.651 "data_offset": 2048, 00:17:00.651 "data_size": 63488 00:17:00.651 }, 00:17:00.651 { 00:17:00.651 "name": "BaseBdev4", 00:17:00.651 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:00.651 "is_configured": true, 00:17:00.651 "data_offset": 2048, 00:17:00.651 "data_size": 63488 00:17:00.651 } 00:17:00.651 ] 00:17:00.651 }' 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.651 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.911 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.172 "name": "raid_bdev1", 00:17:01.172 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:01.172 "strip_size_kb": 64, 00:17:01.172 "state": "online", 00:17:01.172 "raid_level": "raid5f", 00:17:01.172 "superblock": true, 00:17:01.172 "num_base_bdevs": 4, 00:17:01.172 "num_base_bdevs_discovered": 3, 00:17:01.172 "num_base_bdevs_operational": 3, 00:17:01.172 "base_bdevs_list": [ 00:17:01.172 { 00:17:01.172 "name": null, 00:17:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.172 "is_configured": false, 00:17:01.172 "data_offset": 0, 00:17:01.172 "data_size": 63488 00:17:01.172 }, 00:17:01.172 { 00:17:01.172 "name": "BaseBdev2", 00:17:01.172 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:01.172 "is_configured": true, 00:17:01.172 "data_offset": 2048, 00:17:01.172 "data_size": 63488 00:17:01.172 }, 00:17:01.172 { 00:17:01.172 "name": "BaseBdev3", 00:17:01.172 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:01.172 "is_configured": true, 00:17:01.172 "data_offset": 2048, 00:17:01.172 "data_size": 63488 00:17:01.172 }, 00:17:01.172 { 00:17:01.172 "name": "BaseBdev4", 00:17:01.172 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 
00:17:01.172 "is_configured": true, 00:17:01.172 "data_offset": 2048, 00:17:01.172 "data_size": 63488 00:17:01.172 } 00:17:01.172 ] 00:17:01.172 }' 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.172 [2024-11-18 04:05:57.665438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.172 [2024-11-18 04:05:57.679533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.172 04:05:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:01.172 [2024-11-18 04:05:57.687783] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.114 "name": "raid_bdev1", 00:17:02.114 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:02.114 "strip_size_kb": 64, 00:17:02.114 "state": "online", 00:17:02.114 "raid_level": "raid5f", 00:17:02.114 "superblock": true, 00:17:02.114 "num_base_bdevs": 4, 00:17:02.114 "num_base_bdevs_discovered": 4, 00:17:02.114 "num_base_bdevs_operational": 4, 00:17:02.114 "process": { 00:17:02.114 "type": "rebuild", 00:17:02.114 "target": "spare", 00:17:02.114 "progress": { 00:17:02.114 "blocks": 19200, 00:17:02.114 "percent": 10 00:17:02.114 } 00:17:02.114 }, 00:17:02.114 "base_bdevs_list": [ 00:17:02.114 { 00:17:02.114 "name": "spare", 00:17:02.114 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 00:17:02.114 "data_size": 63488 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev2", 00:17:02.114 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 00:17:02.114 "data_size": 63488 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev3", 00:17:02.114 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 
00:17:02.114 "data_size": 63488 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev4", 00:17:02.114 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 00:17:02.114 "data_size": 63488 00:17:02.114 } 00:17:02.114 ] 00:17:02.114 }' 00:17:02.114 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:02.375 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=632 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.375 "name": "raid_bdev1", 00:17:02.375 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:02.375 "strip_size_kb": 64, 00:17:02.375 "state": "online", 00:17:02.375 "raid_level": "raid5f", 00:17:02.375 "superblock": true, 00:17:02.375 "num_base_bdevs": 4, 00:17:02.375 "num_base_bdevs_discovered": 4, 00:17:02.375 "num_base_bdevs_operational": 4, 00:17:02.375 "process": { 00:17:02.375 "type": "rebuild", 00:17:02.375 "target": "spare", 00:17:02.375 "progress": { 00:17:02.375 "blocks": 21120, 00:17:02.375 "percent": 11 00:17:02.375 } 00:17:02.375 }, 00:17:02.375 "base_bdevs_list": [ 00:17:02.375 { 00:17:02.375 "name": "spare", 00:17:02.375 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:02.375 "is_configured": true, 00:17:02.375 "data_offset": 2048, 00:17:02.375 "data_size": 63488 00:17:02.375 }, 00:17:02.375 { 00:17:02.375 "name": "BaseBdev2", 00:17:02.375 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:02.375 "is_configured": true, 00:17:02.375 "data_offset": 2048, 00:17:02.375 "data_size": 63488 00:17:02.375 }, 00:17:02.375 { 00:17:02.375 "name": "BaseBdev3", 00:17:02.375 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:02.375 "is_configured": true, 00:17:02.375 "data_offset": 2048, 
00:17:02.375 "data_size": 63488 00:17:02.375 }, 00:17:02.375 { 00:17:02.375 "name": "BaseBdev4", 00:17:02.375 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:02.375 "is_configured": true, 00:17:02.375 "data_offset": 2048, 00:17:02.375 "data_size": 63488 00:17:02.375 } 00:17:02.375 ] 00:17:02.375 }' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.375 04:05:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.789 04:05:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:03.789 04:06:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.789 04:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.789 "name": "raid_bdev1", 00:17:03.789 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:03.789 "strip_size_kb": 64, 00:17:03.789 "state": "online", 00:17:03.789 "raid_level": "raid5f", 00:17:03.789 "superblock": true, 00:17:03.789 "num_base_bdevs": 4, 00:17:03.789 "num_base_bdevs_discovered": 4, 00:17:03.789 "num_base_bdevs_operational": 4, 00:17:03.789 "process": { 00:17:03.789 "type": "rebuild", 00:17:03.789 "target": "spare", 00:17:03.789 "progress": { 00:17:03.789 "blocks": 44160, 00:17:03.789 "percent": 23 00:17:03.789 } 00:17:03.789 }, 00:17:03.789 "base_bdevs_list": [ 00:17:03.789 { 00:17:03.789 "name": "spare", 00:17:03.789 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:03.789 "is_configured": true, 00:17:03.790 "data_offset": 2048, 00:17:03.790 "data_size": 63488 00:17:03.790 }, 00:17:03.790 { 00:17:03.790 "name": "BaseBdev2", 00:17:03.790 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:03.790 "is_configured": true, 00:17:03.790 "data_offset": 2048, 00:17:03.790 "data_size": 63488 00:17:03.790 }, 00:17:03.790 { 00:17:03.790 "name": "BaseBdev3", 00:17:03.790 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:03.790 "is_configured": true, 00:17:03.790 "data_offset": 2048, 00:17:03.790 "data_size": 63488 00:17:03.790 }, 00:17:03.790 { 00:17:03.790 "name": "BaseBdev4", 00:17:03.790 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:03.790 "is_configured": true, 00:17:03.790 "data_offset": 2048, 00:17:03.790 "data_size": 63488 00:17:03.790 } 00:17:03.790 ] 00:17:03.790 }' 00:17:03.790 04:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.790 04:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.790 04:06:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.790 04:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.790 04:06:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.835 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.836 "name": "raid_bdev1", 00:17:04.836 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:04.836 "strip_size_kb": 64, 00:17:04.836 "state": "online", 00:17:04.836 "raid_level": "raid5f", 00:17:04.836 "superblock": true, 00:17:04.836 "num_base_bdevs": 4, 00:17:04.836 "num_base_bdevs_discovered": 4, 00:17:04.836 "num_base_bdevs_operational": 
4, 00:17:04.836 "process": { 00:17:04.836 "type": "rebuild", 00:17:04.836 "target": "spare", 00:17:04.836 "progress": { 00:17:04.836 "blocks": 65280, 00:17:04.836 "percent": 34 00:17:04.836 } 00:17:04.836 }, 00:17:04.836 "base_bdevs_list": [ 00:17:04.836 { 00:17:04.836 "name": "spare", 00:17:04.836 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:04.836 "is_configured": true, 00:17:04.836 "data_offset": 2048, 00:17:04.836 "data_size": 63488 00:17:04.836 }, 00:17:04.836 { 00:17:04.836 "name": "BaseBdev2", 00:17:04.836 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:04.836 "is_configured": true, 00:17:04.836 "data_offset": 2048, 00:17:04.836 "data_size": 63488 00:17:04.836 }, 00:17:04.836 { 00:17:04.836 "name": "BaseBdev3", 00:17:04.836 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:04.836 "is_configured": true, 00:17:04.836 "data_offset": 2048, 00:17:04.836 "data_size": 63488 00:17:04.836 }, 00:17:04.836 { 00:17:04.836 "name": "BaseBdev4", 00:17:04.836 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:04.836 "is_configured": true, 00:17:04.836 "data_offset": 2048, 00:17:04.836 "data_size": 63488 00:17:04.836 } 00:17:04.836 ] 00:17:04.836 }' 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.836 04:06:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.779 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.779 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.779 
04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.780 "name": "raid_bdev1", 00:17:05.780 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:05.780 "strip_size_kb": 64, 00:17:05.780 "state": "online", 00:17:05.780 "raid_level": "raid5f", 00:17:05.780 "superblock": true, 00:17:05.780 "num_base_bdevs": 4, 00:17:05.780 "num_base_bdevs_discovered": 4, 00:17:05.780 "num_base_bdevs_operational": 4, 00:17:05.780 "process": { 00:17:05.780 "type": "rebuild", 00:17:05.780 "target": "spare", 00:17:05.780 "progress": { 00:17:05.780 "blocks": 86400, 00:17:05.780 "percent": 45 00:17:05.780 } 00:17:05.780 }, 00:17:05.780 "base_bdevs_list": [ 00:17:05.780 { 00:17:05.780 "name": "spare", 00:17:05.780 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:05.780 "is_configured": true, 00:17:05.780 "data_offset": 2048, 00:17:05.780 "data_size": 63488 00:17:05.780 }, 00:17:05.780 { 00:17:05.780 "name": "BaseBdev2", 00:17:05.780 "uuid": 
"714bb279-75df-590f-adf9-7924ef72a0df", 00:17:05.780 "is_configured": true, 00:17:05.780 "data_offset": 2048, 00:17:05.780 "data_size": 63488 00:17:05.780 }, 00:17:05.780 { 00:17:05.780 "name": "BaseBdev3", 00:17:05.780 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:05.780 "is_configured": true, 00:17:05.780 "data_offset": 2048, 00:17:05.780 "data_size": 63488 00:17:05.780 }, 00:17:05.780 { 00:17:05.780 "name": "BaseBdev4", 00:17:05.780 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:05.780 "is_configured": true, 00:17:05.780 "data_offset": 2048, 00:17:05.780 "data_size": 63488 00:17:05.780 } 00:17:05.780 ] 00:17:05.780 }' 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.780 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.040 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.040 04:06:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.980 "name": "raid_bdev1", 00:17:06.980 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:06.980 "strip_size_kb": 64, 00:17:06.980 "state": "online", 00:17:06.980 "raid_level": "raid5f", 00:17:06.980 "superblock": true, 00:17:06.980 "num_base_bdevs": 4, 00:17:06.980 "num_base_bdevs_discovered": 4, 00:17:06.980 "num_base_bdevs_operational": 4, 00:17:06.980 "process": { 00:17:06.980 "type": "rebuild", 00:17:06.980 "target": "spare", 00:17:06.980 "progress": { 00:17:06.980 "blocks": 109440, 00:17:06.980 "percent": 57 00:17:06.980 } 00:17:06.980 }, 00:17:06.980 "base_bdevs_list": [ 00:17:06.980 { 00:17:06.980 "name": "spare", 00:17:06.980 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:06.980 "is_configured": true, 00:17:06.980 "data_offset": 2048, 00:17:06.980 "data_size": 63488 00:17:06.980 }, 00:17:06.980 { 00:17:06.980 "name": "BaseBdev2", 00:17:06.980 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:06.980 "is_configured": true, 00:17:06.980 "data_offset": 2048, 00:17:06.980 "data_size": 63488 00:17:06.980 }, 00:17:06.980 { 00:17:06.980 "name": "BaseBdev3", 00:17:06.980 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:06.980 "is_configured": true, 00:17:06.980 "data_offset": 2048, 00:17:06.980 "data_size": 63488 00:17:06.980 }, 00:17:06.980 { 00:17:06.980 "name": "BaseBdev4", 00:17:06.980 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:06.980 "is_configured": true, 00:17:06.980 "data_offset": 
2048, 00:17:06.980 "data_size": 63488 00:17:06.980 } 00:17:06.980 ] 00:17:06.980 }' 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.980 04:06:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.364 
"name": "raid_bdev1", 00:17:08.364 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:08.364 "strip_size_kb": 64, 00:17:08.364 "state": "online", 00:17:08.364 "raid_level": "raid5f", 00:17:08.364 "superblock": true, 00:17:08.364 "num_base_bdevs": 4, 00:17:08.364 "num_base_bdevs_discovered": 4, 00:17:08.364 "num_base_bdevs_operational": 4, 00:17:08.364 "process": { 00:17:08.364 "type": "rebuild", 00:17:08.364 "target": "spare", 00:17:08.364 "progress": { 00:17:08.364 "blocks": 130560, 00:17:08.364 "percent": 68 00:17:08.364 } 00:17:08.364 }, 00:17:08.364 "base_bdevs_list": [ 00:17:08.364 { 00:17:08.364 "name": "spare", 00:17:08.364 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:08.364 "is_configured": true, 00:17:08.364 "data_offset": 2048, 00:17:08.364 "data_size": 63488 00:17:08.364 }, 00:17:08.364 { 00:17:08.364 "name": "BaseBdev2", 00:17:08.364 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:08.364 "is_configured": true, 00:17:08.364 "data_offset": 2048, 00:17:08.364 "data_size": 63488 00:17:08.364 }, 00:17:08.364 { 00:17:08.364 "name": "BaseBdev3", 00:17:08.364 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:08.364 "is_configured": true, 00:17:08.364 "data_offset": 2048, 00:17:08.364 "data_size": 63488 00:17:08.364 }, 00:17:08.364 { 00:17:08.364 "name": "BaseBdev4", 00:17:08.364 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:08.364 "is_configured": true, 00:17:08.364 "data_offset": 2048, 00:17:08.364 "data_size": 63488 00:17:08.364 } 00:17:08.364 ] 00:17:08.364 }' 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.364 04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.364 
04:06:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.305 "name": "raid_bdev1", 00:17:09.305 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:09.305 "strip_size_kb": 64, 00:17:09.305 "state": "online", 00:17:09.305 "raid_level": "raid5f", 00:17:09.305 "superblock": true, 00:17:09.305 "num_base_bdevs": 4, 00:17:09.305 "num_base_bdevs_discovered": 4, 00:17:09.305 "num_base_bdevs_operational": 4, 00:17:09.305 "process": { 00:17:09.305 "type": "rebuild", 00:17:09.305 "target": "spare", 00:17:09.305 "progress": { 00:17:09.305 "blocks": 153600, 00:17:09.305 "percent": 80 00:17:09.305 } 00:17:09.305 }, 
00:17:09.305 "base_bdevs_list": [ 00:17:09.305 { 00:17:09.305 "name": "spare", 00:17:09.305 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:09.305 "is_configured": true, 00:17:09.305 "data_offset": 2048, 00:17:09.305 "data_size": 63488 00:17:09.305 }, 00:17:09.305 { 00:17:09.305 "name": "BaseBdev2", 00:17:09.305 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:09.305 "is_configured": true, 00:17:09.305 "data_offset": 2048, 00:17:09.305 "data_size": 63488 00:17:09.305 }, 00:17:09.305 { 00:17:09.305 "name": "BaseBdev3", 00:17:09.305 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:09.305 "is_configured": true, 00:17:09.305 "data_offset": 2048, 00:17:09.305 "data_size": 63488 00:17:09.305 }, 00:17:09.305 { 00:17:09.305 "name": "BaseBdev4", 00:17:09.305 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:09.305 "is_configured": true, 00:17:09.305 "data_offset": 2048, 00:17:09.305 "data_size": 63488 00:17:09.305 } 00:17:09.305 ] 00:17:09.305 }' 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.305 04:06:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.689 "name": "raid_bdev1", 00:17:10.689 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:10.689 "strip_size_kb": 64, 00:17:10.689 "state": "online", 00:17:10.689 "raid_level": "raid5f", 00:17:10.689 "superblock": true, 00:17:10.689 "num_base_bdevs": 4, 00:17:10.689 "num_base_bdevs_discovered": 4, 00:17:10.689 "num_base_bdevs_operational": 4, 00:17:10.689 "process": { 00:17:10.689 "type": "rebuild", 00:17:10.689 "target": "spare", 00:17:10.689 "progress": { 00:17:10.689 "blocks": 174720, 00:17:10.689 "percent": 91 00:17:10.689 } 00:17:10.689 }, 00:17:10.689 "base_bdevs_list": [ 00:17:10.689 { 00:17:10.689 "name": "spare", 00:17:10.689 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:10.689 "is_configured": true, 00:17:10.689 "data_offset": 2048, 00:17:10.689 "data_size": 63488 00:17:10.689 }, 00:17:10.689 { 00:17:10.689 "name": "BaseBdev2", 00:17:10.689 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:10.689 "is_configured": true, 00:17:10.689 "data_offset": 2048, 00:17:10.689 "data_size": 63488 00:17:10.689 }, 00:17:10.689 { 00:17:10.689 "name": "BaseBdev3", 
00:17:10.689 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:10.689 "is_configured": true, 00:17:10.689 "data_offset": 2048, 00:17:10.689 "data_size": 63488 00:17:10.689 }, 00:17:10.689 { 00:17:10.689 "name": "BaseBdev4", 00:17:10.689 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:10.689 "is_configured": true, 00:17:10.689 "data_offset": 2048, 00:17:10.689 "data_size": 63488 00:17:10.689 } 00:17:10.689 ] 00:17:10.689 }' 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.689 04:06:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.689 04:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.689 04:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.689 04:06:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.260 [2024-11-18 04:06:07.727609] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:11.260 [2024-11-18 04:06:07.727730] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:11.260 [2024-11-18 04:06:07.727890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.521 04:06:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.521 "name": "raid_bdev1", 00:17:11.521 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:11.521 "strip_size_kb": 64, 00:17:11.521 "state": "online", 00:17:11.521 "raid_level": "raid5f", 00:17:11.521 "superblock": true, 00:17:11.521 "num_base_bdevs": 4, 00:17:11.521 "num_base_bdevs_discovered": 4, 00:17:11.521 "num_base_bdevs_operational": 4, 00:17:11.521 "base_bdevs_list": [ 00:17:11.521 { 00:17:11.521 "name": "spare", 00:17:11.521 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:11.521 "is_configured": true, 00:17:11.521 "data_offset": 2048, 00:17:11.521 "data_size": 63488 00:17:11.521 }, 00:17:11.521 { 00:17:11.521 "name": "BaseBdev2", 00:17:11.521 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:11.521 "is_configured": true, 00:17:11.521 "data_offset": 2048, 00:17:11.521 "data_size": 63488 00:17:11.521 }, 00:17:11.521 { 00:17:11.521 "name": "BaseBdev3", 00:17:11.521 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:11.521 "is_configured": true, 00:17:11.521 "data_offset": 2048, 00:17:11.521 "data_size": 63488 00:17:11.521 }, 00:17:11.521 { 00:17:11.521 "name": "BaseBdev4", 00:17:11.521 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:11.521 "is_configured": true, 00:17:11.521 "data_offset": 2048, 
00:17:11.521 "data_size": 63488 00:17:11.521 } 00:17:11.521 ] 00:17:11.521 }' 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:11.521 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.782 "name": "raid_bdev1", 00:17:11.782 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:11.782 "strip_size_kb": 64, 00:17:11.782 
"state": "online", 00:17:11.782 "raid_level": "raid5f", 00:17:11.782 "superblock": true, 00:17:11.782 "num_base_bdevs": 4, 00:17:11.782 "num_base_bdevs_discovered": 4, 00:17:11.782 "num_base_bdevs_operational": 4, 00:17:11.782 "base_bdevs_list": [ 00:17:11.782 { 00:17:11.782 "name": "spare", 00:17:11.782 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:11.782 "is_configured": true, 00:17:11.782 "data_offset": 2048, 00:17:11.782 "data_size": 63488 00:17:11.782 }, 00:17:11.782 { 00:17:11.782 "name": "BaseBdev2", 00:17:11.782 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:11.782 "is_configured": true, 00:17:11.782 "data_offset": 2048, 00:17:11.782 "data_size": 63488 00:17:11.782 }, 00:17:11.782 { 00:17:11.782 "name": "BaseBdev3", 00:17:11.782 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:11.782 "is_configured": true, 00:17:11.782 "data_offset": 2048, 00:17:11.782 "data_size": 63488 00:17:11.782 }, 00:17:11.782 { 00:17:11.782 "name": "BaseBdev4", 00:17:11.782 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:11.782 "is_configured": true, 00:17:11.782 "data_offset": 2048, 00:17:11.782 "data_size": 63488 00:17:11.782 } 00:17:11.782 ] 00:17:11.782 }' 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.782 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.782 "name": "raid_bdev1", 00:17:11.782 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:11.782 "strip_size_kb": 64, 00:17:11.782 "state": "online", 00:17:11.782 "raid_level": "raid5f", 00:17:11.782 "superblock": true, 00:17:11.782 "num_base_bdevs": 4, 00:17:11.782 "num_base_bdevs_discovered": 4, 00:17:11.783 "num_base_bdevs_operational": 4, 00:17:11.783 "base_bdevs_list": [ 00:17:11.783 { 00:17:11.783 "name": "spare", 00:17:11.783 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:11.783 "is_configured": true, 00:17:11.783 
"data_offset": 2048, 00:17:11.783 "data_size": 63488 00:17:11.783 }, 00:17:11.783 { 00:17:11.783 "name": "BaseBdev2", 00:17:11.783 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:11.783 "is_configured": true, 00:17:11.783 "data_offset": 2048, 00:17:11.783 "data_size": 63488 00:17:11.783 }, 00:17:11.783 { 00:17:11.783 "name": "BaseBdev3", 00:17:11.783 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:11.783 "is_configured": true, 00:17:11.783 "data_offset": 2048, 00:17:11.783 "data_size": 63488 00:17:11.783 }, 00:17:11.783 { 00:17:11.783 "name": "BaseBdev4", 00:17:11.783 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:11.783 "is_configured": true, 00:17:11.783 "data_offset": 2048, 00:17:11.783 "data_size": 63488 00:17:11.783 } 00:17:11.783 ] 00:17:11.783 }' 00:17:11.783 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.783 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.352 [2024-11-18 04:06:08.779942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:12.352 [2024-11-18 04:06:08.779970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.352 [2024-11-18 04:06:08.780045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.352 [2024-11-18 04:06:08.780125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.352 [2024-11-18 04:06:08.780146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.352 
04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.352 04:06:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:12.613 /dev/nbd0 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.613 1+0 records in 00:17:12.613 1+0 records out 00:17:12.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601598 s, 6.8 MB/s 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.613 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:12.873 /dev/nbd1 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.873 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.873 1+0 records in 00:17:12.873 1+0 records out 00:17:12.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044676 s, 9.2 MB/s 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.874 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.134 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.394 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.395 
04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.395 [2024-11-18 04:06:09.973008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.395 [2024-11-18 04:06:09.973079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.395 [2024-11-18 04:06:09.973105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:13.395 [2024-11-18 04:06:09.973114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.395 [2024-11-18 04:06:09.975203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.395 [2024-11-18 04:06:09.975236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.395 [2024-11-18 04:06:09.975311] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.395 [2024-11-18 04:06:09.975363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.395 [2024-11-18 04:06:09.975507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.395 [2024-11-18 04:06:09.975591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.395 [2024-11-18 04:06:09.975676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:13.395 spare 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.395 04:06:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.654 [2024-11-18 04:06:10.075582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:13.654 [2024-11-18 04:06:10.075612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:13.654 [2024-11-18 04:06:10.075904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:13.654 [2024-11-18 04:06:10.082914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:13.654 [2024-11-18 04:06:10.082936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:13.654 [2024-11-18 04:06:10.083107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.654 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.654 "name": "raid_bdev1", 00:17:13.654 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:13.654 "strip_size_kb": 64, 00:17:13.654 "state": "online", 00:17:13.654 "raid_level": "raid5f", 00:17:13.654 "superblock": true, 00:17:13.654 "num_base_bdevs": 4, 00:17:13.654 "num_base_bdevs_discovered": 4, 00:17:13.654 "num_base_bdevs_operational": 4, 00:17:13.654 "base_bdevs_list": [ 00:17:13.654 { 00:17:13.654 "name": "spare", 00:17:13.654 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:13.654 "is_configured": true, 00:17:13.654 "data_offset": 2048, 00:17:13.654 "data_size": 63488 00:17:13.654 }, 00:17:13.654 { 00:17:13.654 "name": "BaseBdev2", 00:17:13.654 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:13.654 "is_configured": true, 00:17:13.654 "data_offset": 2048, 00:17:13.654 "data_size": 63488 00:17:13.654 }, 00:17:13.654 { 00:17:13.654 "name": "BaseBdev3", 00:17:13.654 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:13.654 
"is_configured": true, 00:17:13.654 "data_offset": 2048, 00:17:13.654 "data_size": 63488 00:17:13.654 }, 00:17:13.654 { 00:17:13.654 "name": "BaseBdev4", 00:17:13.654 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:13.654 "is_configured": true, 00:17:13.655 "data_offset": 2048, 00:17:13.655 "data_size": 63488 00:17:13.655 } 00:17:13.655 ] 00:17:13.655 }' 00:17:13.655 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.655 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.914 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.174 "name": "raid_bdev1", 00:17:14.174 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:14.174 "strip_size_kb": 64, 00:17:14.174 "state": "online", 00:17:14.174 "raid_level": "raid5f", 
00:17:14.174 "superblock": true, 00:17:14.174 "num_base_bdevs": 4, 00:17:14.174 "num_base_bdevs_discovered": 4, 00:17:14.174 "num_base_bdevs_operational": 4, 00:17:14.174 "base_bdevs_list": [ 00:17:14.174 { 00:17:14.174 "name": "spare", 00:17:14.174 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:14.174 "is_configured": true, 00:17:14.174 "data_offset": 2048, 00:17:14.174 "data_size": 63488 00:17:14.174 }, 00:17:14.174 { 00:17:14.174 "name": "BaseBdev2", 00:17:14.174 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:14.174 "is_configured": true, 00:17:14.174 "data_offset": 2048, 00:17:14.174 "data_size": 63488 00:17:14.174 }, 00:17:14.174 { 00:17:14.174 "name": "BaseBdev3", 00:17:14.174 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:14.174 "is_configured": true, 00:17:14.174 "data_offset": 2048, 00:17:14.174 "data_size": 63488 00:17:14.174 }, 00:17:14.174 { 00:17:14.174 "name": "BaseBdev4", 00:17:14.174 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:14.174 "is_configured": true, 00:17:14.174 "data_offset": 2048, 00:17:14.174 "data_size": 63488 00:17:14.174 } 00:17:14.174 ] 00:17:14.174 }' 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.174 [2024-11-18 04:06:10.714117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- 
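The check at `bdev_raid.sh@751` above pipes `bdev_raid_get_bdevs all` through `jq -r '.[].base_bdevs_list[0].name'` and pattern-matches the result against `spare` before removing it. A minimal Python mirror of that lookup, using a trimmed sample of the JSON visible in the trace (only the fields the check reads are kept; this is an illustration, not SPDK code):

```python
import json

# Trimmed sample of `rpc_cmd bdev_raid_get_bdevs all` output, copied from
# the trace above (only the fields this check reads are kept).
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "base_bdevs_list": [
      {"name": "spare", "is_configured": true},
      {"name": "BaseBdev2", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[].base_bdevs_list[0].name' followed by the bash
# pattern match [[ spare == \s\p\a\r\e ]] -- confirm slot 0 holds the
# "spare" bdev before bdev_raid_remove_base_bdev is invoked on it.
slot0 = rpc_output[0]["base_bdevs_list"][0]["name"]
assert slot0 == "spare"
print(slot0)
```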
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.174 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.175 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.175 "name": "raid_bdev1", 00:17:14.175 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:14.175 "strip_size_kb": 64, 00:17:14.175 "state": "online", 00:17:14.175 "raid_level": "raid5f", 00:17:14.175 "superblock": true, 00:17:14.175 "num_base_bdevs": 4, 00:17:14.175 "num_base_bdevs_discovered": 3, 00:17:14.175 "num_base_bdevs_operational": 3, 00:17:14.175 "base_bdevs_list": [ 00:17:14.175 { 00:17:14.175 "name": null, 00:17:14.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.175 "is_configured": false, 00:17:14.175 "data_offset": 0, 00:17:14.175 "data_size": 63488 00:17:14.175 }, 00:17:14.175 { 00:17:14.175 "name": "BaseBdev2", 00:17:14.175 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:14.175 "is_configured": true, 00:17:14.175 "data_offset": 2048, 00:17:14.175 "data_size": 63488 00:17:14.175 }, 00:17:14.175 { 00:17:14.175 "name": "BaseBdev3", 00:17:14.175 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:14.175 "is_configured": true, 00:17:14.175 "data_offset": 2048, 00:17:14.175 "data_size": 63488 00:17:14.175 }, 00:17:14.175 { 00:17:14.175 "name": "BaseBdev4", 00:17:14.175 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:14.175 "is_configured": true, 00:17:14.175 "data_offset": 2048, 00:17:14.175 "data_size": 63488 00:17:14.175 } 00:17:14.175 ] 00:17:14.175 }' 00:17:14.175 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.175 04:06:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.746 04:06:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.746 04:06:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.746 04:06:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.746 [2024-11-18 04:06:11.165435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.746 [2024-11-18 04:06:11.165599] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.746 [2024-11-18 04:06:11.165623] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:14.746 [2024-11-18 04:06:11.165657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.746 [2024-11-18 04:06:11.180418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:14.746 04:06:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.746 04:06:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:14.746 [2024-11-18 04:06:11.189239] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.685 04:06:12 
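The `verify_raid_bdev_process` calls traced above (`bdev_raid.sh@176`/`@177`) extract `.process.type // "none"` and `.process.target // "none"` with jq and compare them to the expected values (`rebuild`/`spare` while a rebuild runs, `none`/`none` otherwise). A sketch of that null-coalescing lookup in Python, against the `raid_bdev_info` captured in the trace after the rebuild starts:

```python
import json

# Sample raid_bdev_info as captured in the trace after the rebuild starts
# (fields trimmed to what verify_raid_bdev_process reads).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "process": {"type": "rebuild", "target": "spare",
              "progress": {"blocks": 19200, "percent": 10}}
}
""")

def process_field(info, key):
    # Mirrors jq's '.process.type // "none"': fall back to "none" when the
    # process object or the key is absent/null.
    return (info.get("process") or {}).get(key) or "none"

# bdev_raid.sh@176/@177 compare these against the expected type/target.
print(process_field(raid_bdev_info, "type"), process_field(raid_bdev_info, "target"))
# A bdev with no active process yields "none none", matching the earlier
# verify_raid_bdev_process raid_bdev1 none none checks in this trace.
```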
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.685 "name": "raid_bdev1", 00:17:15.685 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:15.685 "strip_size_kb": 64, 00:17:15.685 "state": "online", 00:17:15.685 "raid_level": "raid5f", 00:17:15.685 "superblock": true, 00:17:15.685 "num_base_bdevs": 4, 00:17:15.685 "num_base_bdevs_discovered": 4, 00:17:15.685 "num_base_bdevs_operational": 4, 00:17:15.685 "process": { 00:17:15.685 "type": "rebuild", 00:17:15.685 "target": "spare", 00:17:15.685 "progress": { 00:17:15.685 "blocks": 19200, 00:17:15.685 "percent": 10 00:17:15.685 } 00:17:15.685 }, 00:17:15.685 "base_bdevs_list": [ 00:17:15.685 { 00:17:15.685 "name": "spare", 00:17:15.685 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:15.685 "is_configured": true, 00:17:15.685 "data_offset": 2048, 00:17:15.685 "data_size": 63488 00:17:15.685 }, 00:17:15.685 { 00:17:15.685 "name": "BaseBdev2", 00:17:15.685 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:15.685 "is_configured": true, 00:17:15.685 "data_offset": 2048, 00:17:15.685 "data_size": 63488 00:17:15.685 }, 00:17:15.685 { 00:17:15.685 "name": "BaseBdev3", 00:17:15.685 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:15.685 "is_configured": true, 00:17:15.685 "data_offset": 2048, 00:17:15.685 "data_size": 
63488 00:17:15.685 }, 00:17:15.685 { 00:17:15.685 "name": "BaseBdev4", 00:17:15.685 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:15.685 "is_configured": true, 00:17:15.685 "data_offset": 2048, 00:17:15.685 "data_size": 63488 00:17:15.685 } 00:17:15.685 ] 00:17:15.685 }' 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:15.685 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.945 [2024-11-18 04:06:12.339860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.945 [2024-11-18 04:06:12.394565] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:15.945 [2024-11-18 04:06:12.394641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.945 [2024-11-18 04:06:12.394657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.945 [2024-11-18 04:06:12.394666] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.945 "name": "raid_bdev1", 00:17:15.945 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:15.945 "strip_size_kb": 64, 00:17:15.945 "state": "online", 00:17:15.945 "raid_level": "raid5f", 00:17:15.945 "superblock": true, 00:17:15.945 "num_base_bdevs": 4, 00:17:15.945 "num_base_bdevs_discovered": 3, 00:17:15.945 "num_base_bdevs_operational": 3, 00:17:15.945 "base_bdevs_list": [ 00:17:15.945 
{ 00:17:15.945 "name": null, 00:17:15.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.945 "is_configured": false, 00:17:15.945 "data_offset": 0, 00:17:15.945 "data_size": 63488 00:17:15.945 }, 00:17:15.945 { 00:17:15.945 "name": "BaseBdev2", 00:17:15.945 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:15.945 "is_configured": true, 00:17:15.945 "data_offset": 2048, 00:17:15.945 "data_size": 63488 00:17:15.945 }, 00:17:15.945 { 00:17:15.945 "name": "BaseBdev3", 00:17:15.945 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:15.945 "is_configured": true, 00:17:15.945 "data_offset": 2048, 00:17:15.945 "data_size": 63488 00:17:15.945 }, 00:17:15.945 { 00:17:15.945 "name": "BaseBdev4", 00:17:15.945 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:15.945 "is_configured": true, 00:17:15.945 "data_offset": 2048, 00:17:15.945 "data_size": 63488 00:17:15.945 } 00:17:15.945 ] 00:17:15.945 }' 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.945 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.515 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.515 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.515 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.515 [2024-11-18 04:06:12.890729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.515 [2024-11-18 04:06:12.890783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.515 [2024-11-18 04:06:12.890810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:16.515 [2024-11-18 04:06:12.890822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.515 [2024-11-18 04:06:12.891291] 
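After `bdev_raid_remove_base_bdev spare`, the trace shows `verify_raid_bdev_state raid_bdev1 online raid5f 64 3` passing: the array stays online with 3 of 4 base bdevs, and the removed slot keeps a placeholder entry with a null name. The checks that function performs can be mirrored in Python against the post-removal JSON from the trace (illustrative only):

```python
import json

# raid_bdev_info after `bdev_raid_remove_base_bdev spare`, trimmed from the
# trace: the removed slot remains as an unconfigured placeholder.
info = json.loads("""
{
  "name": "raid_bdev1", "state": "online", "raid_level": "raid5f",
  "strip_size_kb": 64, "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# The same expectations verify_raid_bdev_state asserts (online raid5f 64 3).
assert info["state"] == "online"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 3

# The list stays num_base_bdevs slots wide; one slot is just unconfigured.
configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
print(configured)
```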
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.515 [2024-11-18 04:06:12.891312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.515 [2024-11-18 04:06:12.891392] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.515 [2024-11-18 04:06:12.891406] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.515 [2024-11-18 04:06:12.891415] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:16.515 [2024-11-18 04:06:12.891440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.515 [2024-11-18 04:06:12.905560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:16.515 spare 00:17:16.515 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.515 04:06:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:16.515 [2024-11-18 04:06:12.914212] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.455 "name": "raid_bdev1", 00:17:17.455 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:17.455 "strip_size_kb": 64, 00:17:17.455 "state": "online", 00:17:17.455 "raid_level": "raid5f", 00:17:17.455 "superblock": true, 00:17:17.455 "num_base_bdevs": 4, 00:17:17.455 "num_base_bdevs_discovered": 4, 00:17:17.455 "num_base_bdevs_operational": 4, 00:17:17.455 "process": { 00:17:17.455 "type": "rebuild", 00:17:17.455 "target": "spare", 00:17:17.455 "progress": { 00:17:17.455 "blocks": 19200, 00:17:17.455 "percent": 10 00:17:17.455 } 00:17:17.455 }, 00:17:17.455 "base_bdevs_list": [ 00:17:17.455 { 00:17:17.455 "name": "spare", 00:17:17.455 "uuid": "2c3d1985-6918-5200-99a4-286ddf6d84d7", 00:17:17.455 "is_configured": true, 00:17:17.455 "data_offset": 2048, 00:17:17.455 "data_size": 63488 00:17:17.455 }, 00:17:17.455 { 00:17:17.455 "name": "BaseBdev2", 00:17:17.455 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:17.455 "is_configured": true, 00:17:17.455 "data_offset": 2048, 00:17:17.455 "data_size": 63488 00:17:17.455 }, 00:17:17.455 { 00:17:17.455 "name": "BaseBdev3", 00:17:17.455 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:17.455 "is_configured": true, 00:17:17.455 "data_offset": 2048, 00:17:17.455 "data_size": 63488 00:17:17.455 }, 00:17:17.455 { 00:17:17.455 "name": "BaseBdev4", 00:17:17.455 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:17.455 "is_configured": true, 00:17:17.455 "data_offset": 2048, 00:17:17.455 "data_size": 63488 00:17:17.455 } 
00:17:17.455 ] 00:17:17.455 }' 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.455 04:06:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.455 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.455 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.455 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.455 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.455 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.455 [2024-11-18 04:06:14.056900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.715 [2024-11-18 04:06:14.119621] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.715 [2024-11-18 04:06:14.119668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.715 [2024-11-18 04:06:14.119686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.715 [2024-11-18 04:06:14.119692] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.715 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.716 "name": "raid_bdev1", 00:17:17.716 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:17.716 "strip_size_kb": 64, 00:17:17.716 "state": "online", 00:17:17.716 "raid_level": "raid5f", 00:17:17.716 "superblock": true, 00:17:17.716 "num_base_bdevs": 4, 00:17:17.716 "num_base_bdevs_discovered": 3, 00:17:17.716 "num_base_bdevs_operational": 3, 00:17:17.716 "base_bdevs_list": [ 00:17:17.716 { 00:17:17.716 "name": null, 00:17:17.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.716 "is_configured": false, 00:17:17.716 "data_offset": 0, 00:17:17.716 "data_size": 63488 00:17:17.716 }, 00:17:17.716 { 00:17:17.716 
"name": "BaseBdev2", 00:17:17.716 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:17.716 "is_configured": true, 00:17:17.716 "data_offset": 2048, 00:17:17.716 "data_size": 63488 00:17:17.716 }, 00:17:17.716 { 00:17:17.716 "name": "BaseBdev3", 00:17:17.716 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:17.716 "is_configured": true, 00:17:17.716 "data_offset": 2048, 00:17:17.716 "data_size": 63488 00:17:17.716 }, 00:17:17.716 { 00:17:17.716 "name": "BaseBdev4", 00:17:17.716 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:17.716 "is_configured": true, 00:17:17.716 "data_offset": 2048, 00:17:17.716 "data_size": 63488 00:17:17.716 } 00:17:17.716 ] 00:17:17.716 }' 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.716 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.975 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.235 "name": "raid_bdev1", 00:17:18.235 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:18.235 "strip_size_kb": 64, 00:17:18.235 "state": "online", 00:17:18.235 "raid_level": "raid5f", 00:17:18.235 "superblock": true, 00:17:18.235 "num_base_bdevs": 4, 00:17:18.235 "num_base_bdevs_discovered": 3, 00:17:18.235 "num_base_bdevs_operational": 3, 00:17:18.235 "base_bdevs_list": [ 00:17:18.235 { 00:17:18.235 "name": null, 00:17:18.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.235 "is_configured": false, 00:17:18.235 "data_offset": 0, 00:17:18.235 "data_size": 63488 00:17:18.235 }, 00:17:18.235 { 00:17:18.235 "name": "BaseBdev2", 00:17:18.235 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:18.235 "is_configured": true, 00:17:18.235 "data_offset": 2048, 00:17:18.235 "data_size": 63488 00:17:18.235 }, 00:17:18.235 { 00:17:18.235 "name": "BaseBdev3", 00:17:18.235 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:18.235 "is_configured": true, 00:17:18.235 "data_offset": 2048, 00:17:18.235 "data_size": 63488 00:17:18.235 }, 00:17:18.235 { 00:17:18.235 "name": "BaseBdev4", 00:17:18.235 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:18.235 "is_configured": true, 00:17:18.235 "data_offset": 2048, 00:17:18.235 "data_size": 63488 00:17:18.235 } 00:17:18.235 ] 00:17:18.235 }' 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.235 [2024-11-18 04:06:14.739049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.235 [2024-11-18 04:06:14.739097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.235 [2024-11-18 04:06:14.739118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:18.235 [2024-11-18 04:06:14.739126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.235 [2024-11-18 04:06:14.739551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.235 [2024-11-18 04:06:14.739568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.235 [2024-11-18 04:06:14.739640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:18.235 [2024-11-18 04:06:14.739652] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.235 [2024-11-18 04:06:14.739663] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.235 [2024-11-18 04:06:14.739673] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:17:18.235 BaseBdev1 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.235 04:06:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.175 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.176 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.176 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.176 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.176 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.176 04:06:15 
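Two examine outcomes are visible in this trace: `spare` (superblock seq_number 4, smaller than raid_bdev1's 5, uuid present) is re-added and a rebuild starts, while `BaseBdev1` (seq_number 1, but its uuid is absent from the raid superblock) is rejected with `Invalid argument`. A simplified decision sketch modeling only those two observed outcomes; the check ordering and the unmodeled branches are assumptions, not SPDK's actual control flow:

```python
def examine_decision(bdev_seq: int, raid_seq: int, uuid_in_sb: bool) -> str:
    # Only the outcomes actually exercised in this trace are modeled.
    if not uuid_in_sb:
        # raid_bdev_examine_sb: "raid superblock does not contain this
        # bdev's uuid" -> examine fails with Invalid argument.
        return "reject: uuid not in superblock"
    if bdev_seq < raid_seq:
        # "seq_number on bdev ... smaller than existing raid bdev" ->
        # "Re-adding bdev ... to raid bdev ..." and a rebuild is started.
        return "re-add and rebuild"
    return "unknown (not exercised in this trace)"

print(examine_decision(4, 5, True))   # spare at bdev_raid.sh@763
print(examine_decision(1, 5, False))  # BaseBdev1 at bdev_raid.sh@774
```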
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.176 "name": "raid_bdev1", 00:17:19.176 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:19.176 "strip_size_kb": 64, 00:17:19.176 "state": "online", 00:17:19.176 "raid_level": "raid5f", 00:17:19.176 "superblock": true, 00:17:19.176 "num_base_bdevs": 4, 00:17:19.176 "num_base_bdevs_discovered": 3, 00:17:19.176 "num_base_bdevs_operational": 3, 00:17:19.176 "base_bdevs_list": [ 00:17:19.176 { 00:17:19.176 "name": null, 00:17:19.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.176 "is_configured": false, 00:17:19.176 "data_offset": 0, 00:17:19.176 "data_size": 63488 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev2", 00:17:19.176 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 2048, 00:17:19.176 "data_size": 63488 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev3", 00:17:19.176 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 2048, 00:17:19.176 "data_size": 63488 00:17:19.176 }, 00:17:19.176 { 00:17:19.176 "name": "BaseBdev4", 00:17:19.176 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:19.176 "is_configured": true, 00:17:19.176 "data_offset": 2048, 00:17:19.176 "data_size": 63488 00:17:19.176 } 00:17:19.176 ] 00:17:19.176 }' 00:17:19.176 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.176 04:06:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.746 04:06:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.746 "name": "raid_bdev1", 00:17:19.746 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:19.746 "strip_size_kb": 64, 00:17:19.746 "state": "online", 00:17:19.746 "raid_level": "raid5f", 00:17:19.746 "superblock": true, 00:17:19.746 "num_base_bdevs": 4, 00:17:19.746 "num_base_bdevs_discovered": 3, 00:17:19.746 "num_base_bdevs_operational": 3, 00:17:19.746 "base_bdevs_list": [ 00:17:19.746 { 00:17:19.746 "name": null, 00:17:19.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.746 "is_configured": false, 00:17:19.746 "data_offset": 0, 00:17:19.746 "data_size": 63488 00:17:19.746 }, 00:17:19.746 { 00:17:19.746 "name": "BaseBdev2", 00:17:19.746 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:19.746 "is_configured": true, 00:17:19.746 "data_offset": 2048, 00:17:19.746 "data_size": 63488 00:17:19.746 }, 00:17:19.746 { 00:17:19.746 "name": "BaseBdev3", 00:17:19.746 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:19.746 "is_configured": true, 00:17:19.746 "data_offset": 2048, 00:17:19.746 "data_size": 63488 00:17:19.746 }, 00:17:19.746 { 00:17:19.746 "name": "BaseBdev4", 00:17:19.746 "uuid": 
"afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:19.746 "is_configured": true, 00:17:19.746 "data_offset": 2048, 00:17:19.746 "data_size": 63488 00:17:19.746 } 00:17:19.746 ] 00:17:19.746 }' 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.746 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.746 [2024-11-18 04:06:16.348310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.747 
[2024-11-18 04:06:16.348477] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:19.747 [2024-11-18 04:06:16.348498] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:19.747 request: 00:17:19.747 { 00:17:19.747 "base_bdev": "BaseBdev1", 00:17:19.747 "raid_bdev": "raid_bdev1", 00:17:19.747 "method": "bdev_raid_add_base_bdev", 00:17:19.747 "req_id": 1 00:17:19.747 } 00:17:19.747 Got JSON-RPC error response 00:17:19.747 response: 00:17:19.747 { 00:17:19.747 "code": -22, 00:17:19.747 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:19.747 } 00:17:19.747 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.747 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:19.747 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.747 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.747 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.747 04:06:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.130 "name": "raid_bdev1", 00:17:21.130 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:21.130 "strip_size_kb": 64, 00:17:21.130 "state": "online", 00:17:21.130 "raid_level": "raid5f", 00:17:21.130 "superblock": true, 00:17:21.130 "num_base_bdevs": 4, 00:17:21.130 "num_base_bdevs_discovered": 3, 00:17:21.130 "num_base_bdevs_operational": 3, 00:17:21.130 "base_bdevs_list": [ 00:17:21.130 { 00:17:21.130 "name": null, 00:17:21.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.130 "is_configured": false, 00:17:21.130 "data_offset": 0, 00:17:21.130 "data_size": 63488 00:17:21.130 }, 00:17:21.130 { 00:17:21.130 "name": "BaseBdev2", 00:17:21.130 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:21.130 "is_configured": true, 00:17:21.130 "data_offset": 2048, 00:17:21.130 "data_size": 63488 00:17:21.130 }, 00:17:21.130 { 00:17:21.130 "name": 
"BaseBdev3", 00:17:21.130 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:21.130 "is_configured": true, 00:17:21.130 "data_offset": 2048, 00:17:21.130 "data_size": 63488 00:17:21.130 }, 00:17:21.130 { 00:17:21.130 "name": "BaseBdev4", 00:17:21.130 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:21.130 "is_configured": true, 00:17:21.130 "data_offset": 2048, 00:17:21.130 "data_size": 63488 00:17:21.130 } 00:17:21.130 ] 00:17:21.130 }' 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.130 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.390 "name": "raid_bdev1", 00:17:21.390 "uuid": "26509cef-5b45-4769-9e85-573e85ee8117", 00:17:21.390 
"strip_size_kb": 64, 00:17:21.390 "state": "online", 00:17:21.390 "raid_level": "raid5f", 00:17:21.390 "superblock": true, 00:17:21.390 "num_base_bdevs": 4, 00:17:21.390 "num_base_bdevs_discovered": 3, 00:17:21.390 "num_base_bdevs_operational": 3, 00:17:21.390 "base_bdevs_list": [ 00:17:21.390 { 00:17:21.390 "name": null, 00:17:21.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.390 "is_configured": false, 00:17:21.390 "data_offset": 0, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": "BaseBdev2", 00:17:21.390 "uuid": "714bb279-75df-590f-adf9-7924ef72a0df", 00:17:21.390 "is_configured": true, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": "BaseBdev3", 00:17:21.390 "uuid": "a48a6f89-df8f-535e-a6b4-18eb367afaa6", 00:17:21.390 "is_configured": true, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": "BaseBdev4", 00:17:21.390 "uuid": "afb6024f-5115-53e8-a1eb-ee2650061286", 00:17:21.390 "is_configured": true, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 } 00:17:21.390 ] 00:17:21.390 }' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85008 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85008 ']' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85008 00:17:21.390 
04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85008 00:17:21.390 killing process with pid 85008 00:17:21.390 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.390 00:17:21.390 Latency(us) 00:17:21.390 [2024-11-18T04:06:18.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.390 [2024-11-18T04:06:18.031Z] =================================================================================================================== 00:17:21.390 [2024-11-18T04:06:18.031Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85008' 00:17:21.390 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85008 00:17:21.390 [2024-11-18 04:06:17.958461] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.390 [2024-11-18 04:06:17.958571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.391 04:06:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85008 00:17:21.391 [2024-11-18 04:06:17.958646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.391 [2024-11-18 04:06:17.958657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:21.962 [2024-11-18 04:06:18.416477] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.902 04:06:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:22.902 00:17:22.902 real 0m26.903s 00:17:22.902 user 0m33.756s 00:17:22.902 sys 0m3.183s 00:17:22.902 04:06:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.902 04:06:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.902 ************************************ 00:17:22.902 END TEST raid5f_rebuild_test_sb 00:17:22.902 ************************************ 00:17:22.902 04:06:19 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:22.902 04:06:19 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:22.902 04:06:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:22.902 04:06:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.902 04:06:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.902 ************************************ 00:17:22.902 START TEST raid_state_function_test_sb_4k 00:17:22.902 ************************************ 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85824 
00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85824' 00:17:22.902 Process raid pid: 85824 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85824 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85824 ']' 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.902 04:06:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.162 [2024-11-18 04:06:19.618634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:23.162 [2024-11-18 04:06:19.618753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.162 [2024-11-18 04:06:19.799948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.423 [2024-11-18 04:06:19.906314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.682 [2024-11-18 04:06:20.102806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.683 [2024-11-18 04:06:20.102849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.943 [2024-11-18 04:06:20.422579] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.943 [2024-11-18 04:06:20.422636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.943 [2024-11-18 04:06:20.422647] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.943 [2024-11-18 04:06:20.422656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.943 "name": "Existed_Raid", 00:17:23.943 "uuid": 
"a50bcdc5-b59a-4e10-9268-b15c1d0e07c8", 00:17:23.943 "strip_size_kb": 0, 00:17:23.943 "state": "configuring", 00:17:23.943 "raid_level": "raid1", 00:17:23.943 "superblock": true, 00:17:23.943 "num_base_bdevs": 2, 00:17:23.943 "num_base_bdevs_discovered": 0, 00:17:23.943 "num_base_bdevs_operational": 2, 00:17:23.943 "base_bdevs_list": [ 00:17:23.943 { 00:17:23.943 "name": "BaseBdev1", 00:17:23.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.943 "is_configured": false, 00:17:23.943 "data_offset": 0, 00:17:23.943 "data_size": 0 00:17:23.943 }, 00:17:23.943 { 00:17:23.943 "name": "BaseBdev2", 00:17:23.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.943 "is_configured": false, 00:17:23.943 "data_offset": 0, 00:17:23.943 "data_size": 0 00:17:23.943 } 00:17:23.943 ] 00:17:23.943 }' 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.943 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 [2024-11-18 04:06:20.853820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.515 [2024-11-18 04:06:20.853856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.515 04:06:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 [2024-11-18 04:06:20.865802] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.515 [2024-11-18 04:06:20.865854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.515 [2024-11-18 04:06:20.865863] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.515 [2024-11-18 04:06:20.865873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 [2024-11-18 04:06:20.914373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.515 BaseBdev1 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 [ 00:17:24.515 { 00:17:24.515 "name": "BaseBdev1", 00:17:24.515 "aliases": [ 00:17:24.515 "af624446-ed39-47f9-a1df-7565dbba7ee0" 00:17:24.515 ], 00:17:24.515 "product_name": "Malloc disk", 00:17:24.515 "block_size": 4096, 00:17:24.515 "num_blocks": 8192, 00:17:24.515 "uuid": "af624446-ed39-47f9-a1df-7565dbba7ee0", 00:17:24.515 "assigned_rate_limits": { 00:17:24.515 "rw_ios_per_sec": 0, 00:17:24.515 "rw_mbytes_per_sec": 0, 00:17:24.515 "r_mbytes_per_sec": 0, 00:17:24.515 "w_mbytes_per_sec": 0 00:17:24.515 }, 00:17:24.515 "claimed": true, 00:17:24.515 "claim_type": "exclusive_write", 00:17:24.515 "zoned": false, 00:17:24.515 "supported_io_types": { 00:17:24.515 "read": true, 00:17:24.515 "write": true, 00:17:24.515 "unmap": true, 00:17:24.515 "flush": true, 00:17:24.515 "reset": true, 00:17:24.515 "nvme_admin": false, 00:17:24.515 "nvme_io": false, 00:17:24.515 "nvme_io_md": false, 00:17:24.515 "write_zeroes": true, 00:17:24.515 "zcopy": true, 00:17:24.515 
"get_zone_info": false, 00:17:24.515 "zone_management": false, 00:17:24.515 "zone_append": false, 00:17:24.515 "compare": false, 00:17:24.515 "compare_and_write": false, 00:17:24.515 "abort": true, 00:17:24.515 "seek_hole": false, 00:17:24.515 "seek_data": false, 00:17:24.515 "copy": true, 00:17:24.515 "nvme_iov_md": false 00:17:24.515 }, 00:17:24.515 "memory_domains": [ 00:17:24.515 { 00:17:24.515 "dma_device_id": "system", 00:17:24.515 "dma_device_type": 1 00:17:24.515 }, 00:17:24.515 { 00:17:24.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.515 "dma_device_type": 2 00:17:24.515 } 00:17:24.515 ], 00:17:24.515 "driver_specific": {} 00:17:24.515 } 00:17:24.515 ] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.515 "name": "Existed_Raid", 00:17:24.515 "uuid": "149b2396-2991-4dd1-b8e1-aed3bde6f072", 00:17:24.515 "strip_size_kb": 0, 00:17:24.515 "state": "configuring", 00:17:24.515 "raid_level": "raid1", 00:17:24.515 "superblock": true, 00:17:24.515 "num_base_bdevs": 2, 00:17:24.515 "num_base_bdevs_discovered": 1, 00:17:24.515 "num_base_bdevs_operational": 2, 00:17:24.515 "base_bdevs_list": [ 00:17:24.515 { 00:17:24.515 "name": "BaseBdev1", 00:17:24.515 "uuid": "af624446-ed39-47f9-a1df-7565dbba7ee0", 00:17:24.515 "is_configured": true, 00:17:24.515 "data_offset": 256, 00:17:24.515 "data_size": 7936 00:17:24.515 }, 00:17:24.515 { 00:17:24.515 "name": "BaseBdev2", 00:17:24.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.515 "is_configured": false, 00:17:24.515 "data_offset": 0, 00:17:24.515 "data_size": 0 00:17:24.515 } 00:17:24.515 ] 00:17:24.515 }' 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.515 04:06:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.775 [2024-11-18 04:06:21.405564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.775 [2024-11-18 04:06:21.405609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.775 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 [2024-11-18 04:06:21.417593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.036 [2024-11-18 04:06:21.419368] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.036 [2024-11-18 04:06:21.419411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:25.036 04:06:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.036 "name": "Existed_Raid", 00:17:25.036 "uuid": "3293010a-e8f2-4fa7-8b8c-44024bfb9223", 00:17:25.036 "strip_size_kb": 0, 00:17:25.036 "state": "configuring", 00:17:25.036 "raid_level": "raid1", 00:17:25.036 "superblock": true, 
00:17:25.036 "num_base_bdevs": 2, 00:17:25.036 "num_base_bdevs_discovered": 1, 00:17:25.036 "num_base_bdevs_operational": 2, 00:17:25.036 "base_bdevs_list": [ 00:17:25.036 { 00:17:25.036 "name": "BaseBdev1", 00:17:25.036 "uuid": "af624446-ed39-47f9-a1df-7565dbba7ee0", 00:17:25.036 "is_configured": true, 00:17:25.036 "data_offset": 256, 00:17:25.036 "data_size": 7936 00:17:25.036 }, 00:17:25.036 { 00:17:25.036 "name": "BaseBdev2", 00:17:25.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.036 "is_configured": false, 00:17:25.036 "data_offset": 0, 00:17:25.036 "data_size": 0 00:17:25.036 } 00:17:25.036 ] 00:17:25.036 }' 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.036 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 [2024-11-18 04:06:21.911095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.296 [2024-11-18 04:06:21.911343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:25.296 [2024-11-18 04:06:21.911360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.296 [2024-11-18 04:06:21.911630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:25.296 [2024-11-18 04:06:21.911775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:25.296 [2024-11-18 04:06:21.911788] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:25.296 BaseBdev2 00:17:25.296 [2024-11-18 04:06:21.911945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.296 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.296 [ 00:17:25.296 { 00:17:25.296 "name": "BaseBdev2", 00:17:25.556 "aliases": [ 00:17:25.556 "162dafeb-c55a-46ff-b9b1-dcbd871e75d7" 00:17:25.556 ], 00:17:25.556 "product_name": "Malloc 
disk", 00:17:25.556 "block_size": 4096, 00:17:25.556 "num_blocks": 8192, 00:17:25.556 "uuid": "162dafeb-c55a-46ff-b9b1-dcbd871e75d7", 00:17:25.556 "assigned_rate_limits": { 00:17:25.556 "rw_ios_per_sec": 0, 00:17:25.556 "rw_mbytes_per_sec": 0, 00:17:25.556 "r_mbytes_per_sec": 0, 00:17:25.556 "w_mbytes_per_sec": 0 00:17:25.556 }, 00:17:25.556 "claimed": true, 00:17:25.556 "claim_type": "exclusive_write", 00:17:25.556 "zoned": false, 00:17:25.556 "supported_io_types": { 00:17:25.556 "read": true, 00:17:25.556 "write": true, 00:17:25.556 "unmap": true, 00:17:25.556 "flush": true, 00:17:25.556 "reset": true, 00:17:25.556 "nvme_admin": false, 00:17:25.556 "nvme_io": false, 00:17:25.556 "nvme_io_md": false, 00:17:25.556 "write_zeroes": true, 00:17:25.556 "zcopy": true, 00:17:25.556 "get_zone_info": false, 00:17:25.556 "zone_management": false, 00:17:25.556 "zone_append": false, 00:17:25.556 "compare": false, 00:17:25.556 "compare_and_write": false, 00:17:25.556 "abort": true, 00:17:25.556 "seek_hole": false, 00:17:25.556 "seek_data": false, 00:17:25.556 "copy": true, 00:17:25.556 "nvme_iov_md": false 00:17:25.556 }, 00:17:25.556 "memory_domains": [ 00:17:25.556 { 00:17:25.557 "dma_device_id": "system", 00:17:25.557 "dma_device_type": 1 00:17:25.557 }, 00:17:25.557 { 00:17:25.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.557 "dma_device_type": 2 00:17:25.557 } 00:17:25.557 ], 00:17:25.557 "driver_specific": {} 00:17:25.557 } 00:17:25.557 ] 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.557 04:06:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.557 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.557 "name": "Existed_Raid", 00:17:25.557 "uuid": "3293010a-e8f2-4fa7-8b8c-44024bfb9223", 00:17:25.557 "strip_size_kb": 0, 00:17:25.557 "state": "online", 
00:17:25.557 "raid_level": "raid1", 00:17:25.557 "superblock": true, 00:17:25.557 "num_base_bdevs": 2, 00:17:25.557 "num_base_bdevs_discovered": 2, 00:17:25.557 "num_base_bdevs_operational": 2, 00:17:25.557 "base_bdevs_list": [ 00:17:25.557 { 00:17:25.557 "name": "BaseBdev1", 00:17:25.557 "uuid": "af624446-ed39-47f9-a1df-7565dbba7ee0", 00:17:25.557 "is_configured": true, 00:17:25.557 "data_offset": 256, 00:17:25.557 "data_size": 7936 00:17:25.557 }, 00:17:25.557 { 00:17:25.557 "name": "BaseBdev2", 00:17:25.557 "uuid": "162dafeb-c55a-46ff-b9b1-dcbd871e75d7", 00:17:25.557 "is_configured": true, 00:17:25.557 "data_offset": 256, 00:17:25.557 "data_size": 7936 00:17:25.557 } 00:17:25.557 ] 00:17:25.557 }' 00:17:25.557 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.557 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:25.817 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:25.817 [2024-11-18 04:06:22.446442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:26.077 "name": "Existed_Raid", 00:17:26.077 "aliases": [ 00:17:26.077 "3293010a-e8f2-4fa7-8b8c-44024bfb9223" 00:17:26.077 ], 00:17:26.077 "product_name": "Raid Volume", 00:17:26.077 "block_size": 4096, 00:17:26.077 "num_blocks": 7936, 00:17:26.077 "uuid": "3293010a-e8f2-4fa7-8b8c-44024bfb9223", 00:17:26.077 "assigned_rate_limits": { 00:17:26.077 "rw_ios_per_sec": 0, 00:17:26.077 "rw_mbytes_per_sec": 0, 00:17:26.077 "r_mbytes_per_sec": 0, 00:17:26.077 "w_mbytes_per_sec": 0 00:17:26.077 }, 00:17:26.077 "claimed": false, 00:17:26.077 "zoned": false, 00:17:26.077 "supported_io_types": { 00:17:26.077 "read": true, 00:17:26.077 "write": true, 00:17:26.077 "unmap": false, 00:17:26.077 "flush": false, 00:17:26.077 "reset": true, 00:17:26.077 "nvme_admin": false, 00:17:26.077 "nvme_io": false, 00:17:26.077 "nvme_io_md": false, 00:17:26.077 "write_zeroes": true, 00:17:26.077 "zcopy": false, 00:17:26.077 "get_zone_info": false, 00:17:26.077 "zone_management": false, 00:17:26.077 "zone_append": false, 00:17:26.077 "compare": false, 00:17:26.077 "compare_and_write": false, 00:17:26.077 "abort": false, 00:17:26.077 "seek_hole": false, 00:17:26.077 "seek_data": false, 00:17:26.077 "copy": false, 00:17:26.077 "nvme_iov_md": false 00:17:26.077 }, 00:17:26.077 "memory_domains": [ 00:17:26.077 { 00:17:26.077 "dma_device_id": "system", 00:17:26.077 "dma_device_type": 1 00:17:26.077 }, 00:17:26.077 { 00:17:26.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.077 "dma_device_type": 2 00:17:26.077 }, 00:17:26.077 { 00:17:26.077 
"dma_device_id": "system", 00:17:26.077 "dma_device_type": 1 00:17:26.077 }, 00:17:26.077 { 00:17:26.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.077 "dma_device_type": 2 00:17:26.077 } 00:17:26.077 ], 00:17:26.077 "driver_specific": { 00:17:26.077 "raid": { 00:17:26.077 "uuid": "3293010a-e8f2-4fa7-8b8c-44024bfb9223", 00:17:26.077 "strip_size_kb": 0, 00:17:26.077 "state": "online", 00:17:26.077 "raid_level": "raid1", 00:17:26.077 "superblock": true, 00:17:26.077 "num_base_bdevs": 2, 00:17:26.077 "num_base_bdevs_discovered": 2, 00:17:26.077 "num_base_bdevs_operational": 2, 00:17:26.077 "base_bdevs_list": [ 00:17:26.077 { 00:17:26.077 "name": "BaseBdev1", 00:17:26.077 "uuid": "af624446-ed39-47f9-a1df-7565dbba7ee0", 00:17:26.077 "is_configured": true, 00:17:26.077 "data_offset": 256, 00:17:26.077 "data_size": 7936 00:17:26.077 }, 00:17:26.077 { 00:17:26.077 "name": "BaseBdev2", 00:17:26.077 "uuid": "162dafeb-c55a-46ff-b9b1-dcbd871e75d7", 00:17:26.077 "is_configured": true, 00:17:26.077 "data_offset": 256, 00:17:26.077 "data_size": 7936 00:17:26.077 } 00:17:26.077 ] 00:17:26.077 } 00:17:26.077 } 00:17:26.077 }' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:26.077 BaseBdev2' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.077 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.078 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.078 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:26.078 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:26.078 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:26.078 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.078 
04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 [2024-11-18 04:06:22.673893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.338 04:06:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.338 "name": "Existed_Raid", 00:17:26.338 "uuid": "3293010a-e8f2-4fa7-8b8c-44024bfb9223", 00:17:26.338 "strip_size_kb": 0, 00:17:26.338 "state": "online", 00:17:26.338 "raid_level": "raid1", 00:17:26.338 "superblock": true, 00:17:26.338 "num_base_bdevs": 2, 00:17:26.338 "num_base_bdevs_discovered": 1, 00:17:26.338 "num_base_bdevs_operational": 1, 00:17:26.338 "base_bdevs_list": [ 00:17:26.338 { 00:17:26.338 "name": null, 00:17:26.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.338 "is_configured": false, 00:17:26.338 "data_offset": 0, 00:17:26.338 "data_size": 7936 00:17:26.338 }, 00:17:26.338 { 00:17:26.338 "name": "BaseBdev2", 00:17:26.338 "uuid": "162dafeb-c55a-46ff-b9b1-dcbd871e75d7", 00:17:26.338 "is_configured": true, 00:17:26.338 "data_offset": 256, 00:17:26.338 "data_size": 7936 00:17:26.338 } 00:17:26.338 ] 00:17:26.338 }' 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.338 04:06:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:26.908 04:06:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.908 [2024-11-18 04:06:23.288773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:26.908 [2024-11-18 04:06:23.288907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.908 [2024-11-18 04:06:23.377463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.908 [2024-11-18 04:06:23.377513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.908 [2024-11-18 04:06:23.377524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:26.908 04:06:23 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85824 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85824 ']' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85824 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85824 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.908 killing process with pid 85824 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85824' 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85824 00:17:26.908 [2024-11-18 04:06:23.455116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.908 04:06:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85824 00:17:26.908 [2024-11-18 04:06:23.470997] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.291 04:06:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:28.291 00:17:28.291 real 0m4.980s 00:17:28.291 user 0m7.230s 00:17:28.291 sys 0m0.898s 00:17:28.291 04:06:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.291 04:06:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.291 ************************************ 00:17:28.291 END TEST raid_state_function_test_sb_4k 00:17:28.291 ************************************ 00:17:28.291 04:06:24 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:28.291 04:06:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:28.291 04:06:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.291 04:06:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.291 ************************************ 00:17:28.291 START TEST raid_superblock_test_4k 00:17:28.291 ************************************ 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86071 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86071 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86071 ']' 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.291 04:06:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.291 [2024-11-18 04:06:24.665867] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:28.291 [2024-11-18 04:06:24.665990] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86071 ] 00:17:28.291 [2024-11-18 04:06:24.837971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.551 [2024-11-18 04:06:24.940991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.551 [2024-11-18 04:06:25.125033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.551 [2024-11-18 04:06:25.125088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:29.122 04:06:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.122 malloc1 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.122 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.123 [2024-11-18 04:06:25.541001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.123 [2024-11-18 04:06:25.541078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.123 
[2024-11-18 04:06:25.541101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:29.123 [2024-11-18 04:06:25.541109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.123 [2024-11-18 04:06:25.543158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.123 [2024-11-18 04:06:25.543196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.123 pt1 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.123 malloc2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.123 [2024-11-18 04:06:25.595162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.123 [2024-11-18 04:06:25.595212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.123 [2024-11-18 04:06:25.595247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:29.123 [2024-11-18 04:06:25.595255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.123 [2024-11-18 04:06:25.597236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.123 [2024-11-18 04:06:25.597271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.123 pt2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.123 [2024-11-18 04:06:25.607189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.123 [2024-11-18 04:06:25.608952] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.123 [2024-11-18 04:06:25.609126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:29.123 [2024-11-18 04:06:25.609148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.123 [2024-11-18 04:06:25.609378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:29.123 [2024-11-18 04:06:25.609540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:29.123 [2024-11-18 04:06:25.609561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:29.123 [2024-11-18 04:06:25.609683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.123 "name": "raid_bdev1", 00:17:29.123 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:29.123 "strip_size_kb": 0, 00:17:29.123 "state": "online", 00:17:29.123 "raid_level": "raid1", 00:17:29.123 "superblock": true, 00:17:29.123 "num_base_bdevs": 2, 00:17:29.123 "num_base_bdevs_discovered": 2, 00:17:29.123 "num_base_bdevs_operational": 2, 00:17:29.123 "base_bdevs_list": [ 00:17:29.123 { 00:17:29.123 "name": "pt1", 00:17:29.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.123 "is_configured": true, 00:17:29.123 "data_offset": 256, 00:17:29.123 "data_size": 7936 00:17:29.123 }, 00:17:29.123 { 00:17:29.123 "name": "pt2", 00:17:29.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.123 "is_configured": true, 00:17:29.123 "data_offset": 256, 00:17:29.123 "data_size": 7936 00:17:29.123 } 00:17:29.123 ] 00:17:29.123 }' 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.123 04:06:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.694 04:06:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.694 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.694 [2024-11-18 04:06:26.114550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:29.695 "name": "raid_bdev1", 00:17:29.695 "aliases": [ 00:17:29.695 "d0593a94-7d1d-4ed3-ac8e-1f31305f6947" 00:17:29.695 ], 00:17:29.695 "product_name": "Raid Volume", 00:17:29.695 "block_size": 4096, 00:17:29.695 "num_blocks": 7936, 00:17:29.695 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:29.695 "assigned_rate_limits": { 00:17:29.695 "rw_ios_per_sec": 0, 00:17:29.695 "rw_mbytes_per_sec": 0, 00:17:29.695 "r_mbytes_per_sec": 0, 00:17:29.695 "w_mbytes_per_sec": 0 00:17:29.695 }, 00:17:29.695 "claimed": false, 00:17:29.695 "zoned": false, 00:17:29.695 "supported_io_types": { 00:17:29.695 "read": true, 00:17:29.695 "write": true, 00:17:29.695 "unmap": false, 00:17:29.695 "flush": false, 
00:17:29.695 "reset": true, 00:17:29.695 "nvme_admin": false, 00:17:29.695 "nvme_io": false, 00:17:29.695 "nvme_io_md": false, 00:17:29.695 "write_zeroes": true, 00:17:29.695 "zcopy": false, 00:17:29.695 "get_zone_info": false, 00:17:29.695 "zone_management": false, 00:17:29.695 "zone_append": false, 00:17:29.695 "compare": false, 00:17:29.695 "compare_and_write": false, 00:17:29.695 "abort": false, 00:17:29.695 "seek_hole": false, 00:17:29.695 "seek_data": false, 00:17:29.695 "copy": false, 00:17:29.695 "nvme_iov_md": false 00:17:29.695 }, 00:17:29.695 "memory_domains": [ 00:17:29.695 { 00:17:29.695 "dma_device_id": "system", 00:17:29.695 "dma_device_type": 1 00:17:29.695 }, 00:17:29.695 { 00:17:29.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.695 "dma_device_type": 2 00:17:29.695 }, 00:17:29.695 { 00:17:29.695 "dma_device_id": "system", 00:17:29.695 "dma_device_type": 1 00:17:29.695 }, 00:17:29.695 { 00:17:29.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.695 "dma_device_type": 2 00:17:29.695 } 00:17:29.695 ], 00:17:29.695 "driver_specific": { 00:17:29.695 "raid": { 00:17:29.695 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:29.695 "strip_size_kb": 0, 00:17:29.695 "state": "online", 00:17:29.695 "raid_level": "raid1", 00:17:29.695 "superblock": true, 00:17:29.695 "num_base_bdevs": 2, 00:17:29.695 "num_base_bdevs_discovered": 2, 00:17:29.695 "num_base_bdevs_operational": 2, 00:17:29.695 "base_bdevs_list": [ 00:17:29.695 { 00:17:29.695 "name": "pt1", 00:17:29.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.695 "is_configured": true, 00:17:29.695 "data_offset": 256, 00:17:29.695 "data_size": 7936 00:17:29.695 }, 00:17:29.695 { 00:17:29.695 "name": "pt2", 00:17:29.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.695 "is_configured": true, 00:17:29.695 "data_offset": 256, 00:17:29.695 "data_size": 7936 00:17:29.695 } 00:17:29.695 ] 00:17:29.695 } 00:17:29.695 } 00:17:29.695 }' 00:17:29.695 04:06:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:29.695 pt2' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.695 04:06:26 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.695 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:29.956 [2024-11-18 04:06:26.338137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d0593a94-7d1d-4ed3-ac8e-1f31305f6947 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z d0593a94-7d1d-4ed3-ac8e-1f31305f6947 ']' 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 [2024-11-18 04:06:26.385791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.956 [2024-11-18 04:06:26.385812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.956 [2024-11-18 04:06:26.385909] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.956 [2024-11-18 04:06:26.385961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.956 [2024-11-18 04:06:26.385972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.956 [2024-11-18 04:06:26.517586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:29.956 [2024-11-18 04:06:26.519391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:29.956 [2024-11-18 04:06:26.519454] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:29.956 [2024-11-18 04:06:26.519503] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:29.956 [2024-11-18 04:06:26.519516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.956 [2024-11-18 04:06:26.519524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:29.956 request: 00:17:29.956 { 00:17:29.956 "name": "raid_bdev1", 00:17:29.956 "raid_level": "raid1", 00:17:29.956 "base_bdevs": [ 00:17:29.956 "malloc1", 00:17:29.956 "malloc2" 00:17:29.956 ], 00:17:29.956 "superblock": false, 00:17:29.956 "method": "bdev_raid_create", 00:17:29.956 "req_id": 1 00:17:29.956 } 00:17:29.956 Got JSON-RPC error response 00:17:29.956 response: 00:17:29.956 { 00:17:29.956 "code": -17, 00:17:29.956 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:29.956 } 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:29.956 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.957 [2024-11-18 04:06:26.585452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.957 [2024-11-18 04:06:26.585553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.957 [2024-11-18 04:06:26.585584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:29.957 [2024-11-18 04:06:26.585613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.957 [2024-11-18 04:06:26.587625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.957 [2024-11-18 04:06:26.587710] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.957 [2024-11-18 04:06:26.587797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:29.957 [2024-11-18 04:06:26.587895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.957 pt1 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.957 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.217 "name": "raid_bdev1", 00:17:30.217 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:30.217 "strip_size_kb": 0, 00:17:30.217 "state": "configuring", 00:17:30.217 "raid_level": "raid1", 00:17:30.217 "superblock": true, 00:17:30.217 "num_base_bdevs": 2, 00:17:30.217 "num_base_bdevs_discovered": 1, 00:17:30.217 "num_base_bdevs_operational": 2, 00:17:30.217 "base_bdevs_list": [ 00:17:30.217 { 00:17:30.217 "name": "pt1", 00:17:30.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.217 "is_configured": true, 00:17:30.217 "data_offset": 256, 00:17:30.217 "data_size": 7936 00:17:30.217 }, 00:17:30.217 { 00:17:30.217 "name": null, 00:17:30.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.217 "is_configured": false, 00:17:30.217 "data_offset": 256, 00:17:30.217 "data_size": 7936 00:17:30.217 } 00:17:30.217 ] 00:17:30.217 }' 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.217 04:06:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:30.477 [2024-11-18 04:06:27.060659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.477 [2024-11-18 04:06:27.060758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.477 [2024-11-18 04:06:27.060781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:30.477 [2024-11-18 04:06:27.060791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.477 [2024-11-18 04:06:27.061204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.477 [2024-11-18 04:06:27.061224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.477 [2024-11-18 04:06:27.061292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.477 [2024-11-18 04:06:27.061314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.477 [2024-11-18 04:06:27.061441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:30.477 [2024-11-18 04:06:27.061452] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.477 [2024-11-18 04:06:27.061684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:30.477 [2024-11-18 04:06:27.061850] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:30.477 [2024-11-18 04:06:27.061865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:30.477 [2024-11-18 04:06:27.061987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.477 pt2 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:30.477 04:06:27 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.477 "name": "raid_bdev1", 00:17:30.477 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:30.477 
"strip_size_kb": 0, 00:17:30.477 "state": "online", 00:17:30.477 "raid_level": "raid1", 00:17:30.477 "superblock": true, 00:17:30.477 "num_base_bdevs": 2, 00:17:30.477 "num_base_bdevs_discovered": 2, 00:17:30.477 "num_base_bdevs_operational": 2, 00:17:30.477 "base_bdevs_list": [ 00:17:30.477 { 00:17:30.477 "name": "pt1", 00:17:30.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.477 "is_configured": true, 00:17:30.477 "data_offset": 256, 00:17:30.477 "data_size": 7936 00:17:30.477 }, 00:17:30.477 { 00:17:30.477 "name": "pt2", 00:17:30.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.477 "is_configured": true, 00:17:30.477 "data_offset": 256, 00:17:30.477 "data_size": 7936 00:17:30.477 } 00:17:30.477 ] 00:17:30.477 }' 00:17:30.477 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.736 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.996 04:06:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.996 [2024-11-18 04:06:27.516190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.996 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.996 "name": "raid_bdev1", 00:17:30.996 "aliases": [ 00:17:30.996 "d0593a94-7d1d-4ed3-ac8e-1f31305f6947" 00:17:30.996 ], 00:17:30.996 "product_name": "Raid Volume", 00:17:30.996 "block_size": 4096, 00:17:30.996 "num_blocks": 7936, 00:17:30.996 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:30.996 "assigned_rate_limits": { 00:17:30.996 "rw_ios_per_sec": 0, 00:17:30.996 "rw_mbytes_per_sec": 0, 00:17:30.996 "r_mbytes_per_sec": 0, 00:17:30.996 "w_mbytes_per_sec": 0 00:17:30.996 }, 00:17:30.996 "claimed": false, 00:17:30.996 "zoned": false, 00:17:30.996 "supported_io_types": { 00:17:30.996 "read": true, 00:17:30.996 "write": true, 00:17:30.996 "unmap": false, 00:17:30.996 "flush": false, 00:17:30.996 "reset": true, 00:17:30.996 "nvme_admin": false, 00:17:30.996 "nvme_io": false, 00:17:30.996 "nvme_io_md": false, 00:17:30.996 "write_zeroes": true, 00:17:30.996 "zcopy": false, 00:17:30.996 "get_zone_info": false, 00:17:30.996 "zone_management": false, 00:17:30.996 "zone_append": false, 00:17:30.996 "compare": false, 00:17:30.996 "compare_and_write": false, 00:17:30.996 "abort": false, 00:17:30.996 "seek_hole": false, 00:17:30.996 "seek_data": false, 00:17:30.996 "copy": false, 00:17:30.996 "nvme_iov_md": false 00:17:30.996 }, 00:17:30.996 "memory_domains": [ 00:17:30.996 { 00:17:30.996 "dma_device_id": "system", 00:17:30.996 "dma_device_type": 1 00:17:30.996 }, 00:17:30.996 { 00:17:30.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.996 "dma_device_type": 2 00:17:30.996 }, 00:17:30.996 { 00:17:30.996 "dma_device_id": "system", 00:17:30.996 
"dma_device_type": 1 00:17:30.996 }, 00:17:30.996 { 00:17:30.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.996 "dma_device_type": 2 00:17:30.996 } 00:17:30.996 ], 00:17:30.996 "driver_specific": { 00:17:30.996 "raid": { 00:17:30.996 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:30.997 "strip_size_kb": 0, 00:17:30.997 "state": "online", 00:17:30.997 "raid_level": "raid1", 00:17:30.997 "superblock": true, 00:17:30.997 "num_base_bdevs": 2, 00:17:30.997 "num_base_bdevs_discovered": 2, 00:17:30.997 "num_base_bdevs_operational": 2, 00:17:30.997 "base_bdevs_list": [ 00:17:30.997 { 00:17:30.997 "name": "pt1", 00:17:30.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.997 "is_configured": true, 00:17:30.997 "data_offset": 256, 00:17:30.997 "data_size": 7936 00:17:30.997 }, 00:17:30.997 { 00:17:30.997 "name": "pt2", 00:17:30.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.997 "is_configured": true, 00:17:30.997 "data_offset": 256, 00:17:30.997 "data_size": 7936 00:17:30.997 } 00:17:30.997 ] 00:17:30.997 } 00:17:30.997 } 00:17:30.997 }' 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:30.997 pt2' 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.997 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.257 [2024-11-18 04:06:27.735779] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' d0593a94-7d1d-4ed3-ac8e-1f31305f6947 '!=' d0593a94-7d1d-4ed3-ac8e-1f31305f6947 ']' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.257 [2024-11-18 04:06:27.783525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.257 "name": "raid_bdev1", 00:17:31.257 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:31.257 "strip_size_kb": 0, 00:17:31.257 "state": "online", 00:17:31.257 "raid_level": "raid1", 00:17:31.257 "superblock": true, 00:17:31.257 "num_base_bdevs": 2, 00:17:31.257 "num_base_bdevs_discovered": 1, 00:17:31.257 "num_base_bdevs_operational": 1, 00:17:31.257 "base_bdevs_list": [ 00:17:31.257 { 00:17:31.257 "name": null, 00:17:31.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.257 "is_configured": false, 00:17:31.257 "data_offset": 0, 00:17:31.257 "data_size": 7936 00:17:31.257 }, 00:17:31.257 { 00:17:31.257 "name": "pt2", 00:17:31.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.257 "is_configured": true, 00:17:31.257 "data_offset": 256, 00:17:31.257 "data_size": 7936 00:17:31.257 } 00:17:31.257 ] 00:17:31.257 }' 00:17:31.257 04:06:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.257 04:06:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 [2024-11-18 04:06:28.266730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:31.825 [2024-11-18 04:06:28.266795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:31.825 [2024-11-18 04:06:28.266899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.825 [2024-11-18 04:06:28.266957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.825 [2024-11-18 04:06:28.267022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:31.825 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.826 [2024-11-18 04:06:28.338603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.826 [2024-11-18 04:06:28.338713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.826 [2024-11-18 04:06:28.338747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:31.826 [2024-11-18 04:06:28.338778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.826 [2024-11-18 04:06:28.340850] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.826 [2024-11-18 04:06:28.340933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.826 [2024-11-18 04:06:28.341028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.826 [2024-11-18 04:06:28.341106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.826 [2024-11-18 04:06:28.341246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:31.826 [2024-11-18 04:06:28.341285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.826 [2024-11-18 04:06:28.341517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:31.826 [2024-11-18 04:06:28.341699] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:31.826 [2024-11-18 04:06:28.341738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:31.826 [2024-11-18 04:06:28.341912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.826 pt2 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.826 "name": "raid_bdev1", 00:17:31.826 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:31.826 "strip_size_kb": 0, 00:17:31.826 "state": "online", 00:17:31.826 "raid_level": "raid1", 00:17:31.826 "superblock": true, 00:17:31.826 "num_base_bdevs": 2, 00:17:31.826 "num_base_bdevs_discovered": 1, 00:17:31.826 "num_base_bdevs_operational": 1, 00:17:31.826 "base_bdevs_list": [ 00:17:31.826 { 00:17:31.826 "name": null, 00:17:31.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.826 "is_configured": false, 00:17:31.826 "data_offset": 256, 00:17:31.826 "data_size": 7936 00:17:31.826 }, 00:17:31.826 { 00:17:31.826 "name": "pt2", 00:17:31.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.826 "is_configured": true, 00:17:31.826 "data_offset": 256, 00:17:31.826 "data_size": 7936 00:17:31.826 } 00:17:31.826 ] 00:17:31.826 }' 
00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.826 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.399 [2024-11-18 04:06:28.769835] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.399 [2024-11-18 04:06:28.769856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.399 [2024-11-18 04:06:28.769899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.399 [2024-11-18 04:06:28.769934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.399 [2024-11-18 04:06:28.769942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.399 [2024-11-18 04:06:28.813770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:32.399 [2024-11-18 04:06:28.813877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.399 [2024-11-18 04:06:28.813914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:32.399 [2024-11-18 04:06:28.813941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.399 [2024-11-18 04:06:28.815936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.399 [2024-11-18 04:06:28.815998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:32.399 [2024-11-18 04:06:28.816105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:32.399 [2024-11-18 04:06:28.816162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.399 [2024-11-18 04:06:28.816299] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:32.399 [2024-11-18 04:06:28.816351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.399 [2024-11-18 04:06:28.816387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:32.399 [2024-11-18 04:06:28.816483] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.399 [2024-11-18 04:06:28.816582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:32.399 [2024-11-18 04:06:28.816617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:32.399 [2024-11-18 04:06:28.816862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:32.399 [2024-11-18 04:06:28.817032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:32.399 [2024-11-18 04:06:28.817073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:32.399 [2024-11-18 04:06:28.817238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.399 pt1 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.399 "name": "raid_bdev1", 00:17:32.399 "uuid": "d0593a94-7d1d-4ed3-ac8e-1f31305f6947", 00:17:32.399 "strip_size_kb": 0, 00:17:32.399 "state": "online", 00:17:32.399 "raid_level": "raid1", 00:17:32.399 "superblock": true, 00:17:32.399 "num_base_bdevs": 2, 00:17:32.399 "num_base_bdevs_discovered": 1, 00:17:32.399 "num_base_bdevs_operational": 1, 00:17:32.399 "base_bdevs_list": [ 00:17:32.399 { 00:17:32.399 "name": null, 00:17:32.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.399 "is_configured": false, 00:17:32.399 "data_offset": 256, 00:17:32.399 "data_size": 7936 00:17:32.399 }, 00:17:32.399 { 00:17:32.399 "name": "pt2", 00:17:32.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.399 "is_configured": true, 00:17:32.399 "data_offset": 256, 00:17:32.399 "data_size": 7936 00:17:32.399 } 00:17:32.399 ] 00:17:32.399 }' 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.399 04:06:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.664 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.664 [2024-11-18 04:06:29.289144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' d0593a94-7d1d-4ed3-ac8e-1f31305f6947 '!=' d0593a94-7d1d-4ed3-ac8e-1f31305f6947 ']' 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86071 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86071 ']' 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86071 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86071 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86071' 00:17:32.927 killing process with pid 86071 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86071 00:17:32.927 [2024-11-18 04:06:29.355858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:32.927 [2024-11-18 04:06:29.355925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.927 [2024-11-18 04:06:29.355961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.927 [2024-11-18 04:06:29.355974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:32.927 04:06:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86071 00:17:32.927 [2024-11-18 04:06:29.547983] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.308 04:06:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:34.308 00:17:34.308 real 0m5.990s 00:17:34.308 user 0m9.137s 00:17:34.308 sys 0m1.128s 00:17:34.308 04:06:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.308 ************************************ 00:17:34.308 END TEST raid_superblock_test_4k 00:17:34.308 ************************************ 00:17:34.308 04:06:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.308 04:06:30 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:34.308 04:06:30 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:34.308 04:06:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:34.308 04:06:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.308 04:06:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.308 ************************************ 00:17:34.308 START TEST raid_rebuild_test_sb_4k 00:17:34.308 ************************************ 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86393 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86393 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86393 ']' 00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:34.308 04:06:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:34.308 I/O size of 3145728 is greater than zero copy threshold (65536).
00:17:34.308 Zero copy mechanism will not be used.
00:17:34.308 [2024-11-18 04:06:30.753329] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:17:34.308 [2024-11-18 04:06:30.753452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86393 ]
00:17:34.308 [2024-11-18 04:06:30.918819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:34.568 [2024-11-18 04:06:31.021714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:34.829 [2024-11-18 04:06:31.213475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:34.829 [2024-11-18 04:06:31.213513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.090 BaseBdev1_malloc
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.090 [2024-11-18 04:06:31.600943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:17:35.090 [2024-11-18 04:06:31.601065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:35.090 [2024-11-18 04:06:31.601134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:35.090 [2024-11-18 04:06:31.601170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:35.090 [2024-11-18 04:06:31.603131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:35.090 [2024-11-18 04:06:31.603215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:17:35.090 BaseBdev1
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.090 BaseBdev2_malloc
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.090 [2024-11-18 04:06:31.650384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:17:35.090 [2024-11-18 04:06:31.650494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:35.090 [2024-11-18 04:06:31.650529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:35.090 [2024-11-18 04:06:31.650559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:35.090 [2024-11-18 04:06:31.652529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:35.090 [2024-11-18 04:06:31.652611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:17:35.090 BaseBdev2
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.090 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.349 spare_malloc
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.349 spare_delay
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.349 [2024-11-18 04:06:31.750272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:35.349 [2024-11-18 04:06:31.750381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:35.349 [2024-11-18 04:06:31.750416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:35.349 [2024-11-18 04:06:31.750448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:35.349 [2024-11-18 04:06:31.752456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:35.349 [2024-11-18 04:06:31.752527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:35.349 spare
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.349 [2024-11-18 04:06:31.762307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:35.349 [2024-11-18 04:06:31.764084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:35.349 [2024-11-18 04:06:31.764297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:35.349 [2024-11-18 04:06:31.764351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:35.349 [2024-11-18 04:06:31.764593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:35.349 [2024-11-18 04:06:31.764786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:35.349 [2024-11-18 04:06:31.764833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:35.349 [2024-11-18 04:06:31.765011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:35.349 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:35.350 "name": "raid_bdev1",
00:17:35.350 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519",
00:17:35.350 "strip_size_kb": 0,
00:17:35.350 "state": "online",
00:17:35.350 "raid_level": "raid1",
00:17:35.350 "superblock": true,
00:17:35.350 "num_base_bdevs": 2,
00:17:35.350 "num_base_bdevs_discovered": 2,
00:17:35.350 "num_base_bdevs_operational": 2,
00:17:35.350 "base_bdevs_list": [
00:17:35.350 {
00:17:35.350 "name": "BaseBdev1",
00:17:35.350 "uuid": "3dfc791c-ec63-51b8-9855-3d824259b205",
00:17:35.350 "is_configured": true,
00:17:35.350 "data_offset": 256,
00:17:35.350 "data_size": 7936
00:17:35.350 },
00:17:35.350 {
00:17:35.350 "name": "BaseBdev2",
00:17:35.350 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1",
00:17:35.350 "is_configured": true,
00:17:35.350 "data_offset": 256,
00:17:35.350 "data_size": 7936
00:17:35.350 }
00:17:35.350 ]
00:17:35.350 }'
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:35.350 04:06:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.609 [2024-11-18 04:06:32.165848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:17:35.609 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:35.610 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:17:35.870 [2024-11-18 04:06:32.421175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:17:35.870 /dev/nbd0
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:35.870 1+0 records in
00:17:35.870 1+0 records out
00:17:35.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058991 s, 6.9 MB/s
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:17:35.870 04:06:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:17:36.810 7936+0 records in
00:17:36.810 7936+0 records out
00:17:36.810 32505856 bytes (33 MB, 31 MiB) copied, 0.644 s, 50.5 MB/s
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:36.810 [2024-11-18 04:06:33.360856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:36.810 [2024-11-18 04:06:33.384895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:36.810 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.811 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.811 "name": "raid_bdev1",
00:17:36.811 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519",
00:17:36.811 "strip_size_kb": 0,
00:17:36.811 "state": "online",
00:17:36.811 "raid_level": "raid1",
00:17:36.811 "superblock": true,
00:17:36.811 "num_base_bdevs": 2,
00:17:36.811 "num_base_bdevs_discovered": 1,
00:17:36.811 "num_base_bdevs_operational": 1,
00:17:36.811 "base_bdevs_list": [
00:17:36.811 {
00:17:36.811 "name": null,
00:17:36.811 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:36.811 "is_configured": false,
00:17:36.811 "data_offset": 0,
00:17:36.811 "data_size": 7936
00:17:36.811 },
00:17:36.811 {
00:17:36.811 "name": "BaseBdev2",
00:17:36.811 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1",
00:17:36.811 "is_configured": true,
00:17:36.811 "data_offset": 256,
00:17:36.811 "data_size": 7936
00:17:36.811 }
00:17:36.811 ]
00:17:36.811 }'
00:17:36.811 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.811 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:37.380 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:37.381 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.381 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:37.381 [2024-11-18 04:06:33.844158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:37.381 [2024-11-18 04:06:33.861993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:17:37.381 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.381 04:06:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:37.381 [2024-11-18 04:06:33.863834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.320 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:38.320 "name": "raid_bdev1",
00:17:38.320 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519",
00:17:38.320 "strip_size_kb": 0,
00:17:38.320 "state": "online",
00:17:38.320 "raid_level": "raid1",
00:17:38.320 "superblock": true,
00:17:38.320 "num_base_bdevs": 2,
00:17:38.320 "num_base_bdevs_discovered": 2,
00:17:38.320 "num_base_bdevs_operational": 2,
00:17:38.320 "process": {
00:17:38.320 "type": "rebuild",
00:17:38.320 "target": "spare",
00:17:38.320 "progress": {
00:17:38.320 "blocks": 2560,
00:17:38.320 "percent": 32
00:17:38.320 }
00:17:38.320 },
00:17:38.321 "base_bdevs_list": [
00:17:38.321 {
00:17:38.321 "name": "spare",
00:17:38.321 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99",
00:17:38.321 "is_configured": true,
00:17:38.321 "data_offset": 256,
00:17:38.321 "data_size": 7936
00:17:38.321 },
00:17:38.321 {
00:17:38.321 "name": "BaseBdev2",
00:17:38.321 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1",
00:17:38.321 "is_configured": true,
00:17:38.321 "data_offset": 256,
00:17:38.321 "data_size": 7936
00:17:38.321 }
00:17:38.321 ]
00:17:38.321 }'
00:17:38.321 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:38.581 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:38.581 04:06:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:38.581 [2024-11-18 04:06:35.030979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:38.581 [2024-11-18 04:06:35.068529] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:38.581 [2024-11-18 04:06:35.068652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:38.581 [2024-11-18 04:06:35.068688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:38.581 [2024-11-18 04:06:35.068711] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:38.581 "name": "raid_bdev1",
00:17:38.581 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519",
00:17:38.581 "strip_size_kb": 0,
00:17:38.581 "state": "online",
00:17:38.581 "raid_level": "raid1",
00:17:38.581 "superblock": true,
00:17:38.581 "num_base_bdevs": 2,
00:17:38.581 "num_base_bdevs_discovered": 1,
00:17:38.581 "num_base_bdevs_operational": 1,
00:17:38.581 "base_bdevs_list": [
00:17:38.581 {
00:17:38.581 "name": null,
00:17:38.581 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:38.581 "is_configured": false,
00:17:38.581 "data_offset": 0,
00:17:38.581 "data_size": 7936
00:17:38.581 },
00:17:38.581 {
00:17:38.581 "name": "BaseBdev2",
00:17:38.581 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1",
00:17:38.581 "is_configured": true,
00:17:38.581 "data_offset": 256,
00:17:38.581 "data_size": 7936
00:17:38.581 }
00:17:38.581 ]
00:17:38.581 }'
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:38.581 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:39.151 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:39.151 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:39.151 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:39.151 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:39.152 "name": "raid_bdev1",
00:17:39.152 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519",
00:17:39.152 "strip_size_kb": 0,
00:17:39.152 "state": "online",
00:17:39.152 "raid_level": "raid1",
00:17:39.152 "superblock": true,
00:17:39.152 "num_base_bdevs": 2,
00:17:39.152 "num_base_bdevs_discovered": 1,
00:17:39.152 "num_base_bdevs_operational": 1,
00:17:39.152 "base_bdevs_list": [
00:17:39.152 {
00:17:39.152 "name": null,
00:17:39.152 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:39.152 "is_configured": false,
00:17:39.152 "data_offset": 0,
00:17:39.152 "data_size": 7936
00:17:39.152 },
00:17:39.152 {
00:17:39.152 "name": "BaseBdev2",
00:17:39.152 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1",
00:17:39.152 "is_configured": true,
00:17:39.152 "data_offset": 256,
00:17:39.152 "data_size": 7936
00:17:39.152 }
00:17:39.152 ]
00:17:39.152 }'
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:39.152 [2024-11-18 04:06:35.704422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:39.152 [2024-11-18 04:06:35.719478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.152 04:06:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1
00:17:39.152 [2024-11-18 04:06:35.721287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:40.093 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:40.093 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:40.093 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:40.093 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:40.093 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:40.353 "name": "raid_bdev1",
00:17:40.353 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519",
00:17:40.353 "strip_size_kb": 0,
00:17:40.353 "state": "online",
00:17:40.353 "raid_level": "raid1",
00:17:40.353 "superblock": true,
00:17:40.353 "num_base_bdevs": 2,
00:17:40.353 "num_base_bdevs_discovered": 2,
00:17:40.353 "num_base_bdevs_operational": 2,
00:17:40.353 "process": {
00:17:40.353 "type": "rebuild",
00:17:40.353 "target": "spare",
00:17:40.353 "progress": {
00:17:40.353 "blocks": 2560,
00:17:40.353 "percent": 32
} 00:17:40.353 }, 00:17:40.353 "base_bdevs_list": [ 00:17:40.353 { 00:17:40.353 "name": "spare", 00:17:40.353 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:40.353 "is_configured": true, 00:17:40.353 "data_offset": 256, 00:17:40.353 "data_size": 7936 00:17:40.353 }, 00:17:40.353 { 00:17:40.353 "name": "BaseBdev2", 00:17:40.353 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:40.353 "is_configured": true, 00:17:40.353 "data_offset": 256, 00:17:40.353 "data_size": 7936 00:17:40.353 } 00:17:40.353 ] 00:17:40.353 }' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:40.353 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=670 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.353 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.353 "name": "raid_bdev1", 00:17:40.353 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:40.353 "strip_size_kb": 0, 00:17:40.353 "state": "online", 00:17:40.353 "raid_level": "raid1", 00:17:40.353 "superblock": true, 00:17:40.353 "num_base_bdevs": 2, 00:17:40.353 "num_base_bdevs_discovered": 2, 00:17:40.353 "num_base_bdevs_operational": 2, 00:17:40.353 "process": { 00:17:40.353 "type": "rebuild", 00:17:40.353 "target": "spare", 00:17:40.353 "progress": { 00:17:40.353 "blocks": 2816, 00:17:40.353 "percent": 35 00:17:40.353 } 00:17:40.353 }, 00:17:40.353 "base_bdevs_list": [ 00:17:40.353 { 00:17:40.353 "name": "spare", 00:17:40.353 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:40.353 "is_configured": true, 00:17:40.353 "data_offset": 256, 00:17:40.353 "data_size": 7936 00:17:40.353 }, 00:17:40.353 { 00:17:40.353 "name": "BaseBdev2", 00:17:40.353 "uuid": 
"84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:40.353 "is_configured": true, 00:17:40.353 "data_offset": 256, 00:17:40.353 "data_size": 7936 00:17:40.353 } 00:17:40.353 ] 00:17:40.354 }' 00:17:40.354 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.354 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.354 04:06:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.614 04:06:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.614 04:06:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.554 "name": "raid_bdev1", 00:17:41.554 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:41.554 "strip_size_kb": 0, 00:17:41.554 "state": "online", 00:17:41.554 "raid_level": "raid1", 00:17:41.554 "superblock": true, 00:17:41.554 "num_base_bdevs": 2, 00:17:41.554 "num_base_bdevs_discovered": 2, 00:17:41.554 "num_base_bdevs_operational": 2, 00:17:41.554 "process": { 00:17:41.554 "type": "rebuild", 00:17:41.554 "target": "spare", 00:17:41.554 "progress": { 00:17:41.554 "blocks": 5888, 00:17:41.554 "percent": 74 00:17:41.554 } 00:17:41.554 }, 00:17:41.554 "base_bdevs_list": [ 00:17:41.554 { 00:17:41.554 "name": "spare", 00:17:41.554 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:41.554 "is_configured": true, 00:17:41.554 "data_offset": 256, 00:17:41.554 "data_size": 7936 00:17:41.554 }, 00:17:41.554 { 00:17:41.554 "name": "BaseBdev2", 00:17:41.554 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:41.554 "is_configured": true, 00:17:41.554 "data_offset": 256, 00:17:41.554 "data_size": 7936 00:17:41.554 } 00:17:41.554 ] 00:17:41.554 }' 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.554 04:06:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:42.496 [2024-11-18 04:06:38.832894] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:42.496 [2024-11-18 04:06:38.833022] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev 
raid_bdev1 00:17:42.496 [2024-11-18 04:06:38.833152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.755 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.755 "name": "raid_bdev1", 00:17:42.755 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:42.755 "strip_size_kb": 0, 00:17:42.755 "state": "online", 00:17:42.755 "raid_level": "raid1", 00:17:42.755 "superblock": true, 00:17:42.755 "num_base_bdevs": 2, 00:17:42.755 "num_base_bdevs_discovered": 2, 00:17:42.755 "num_base_bdevs_operational": 2, 00:17:42.756 "base_bdevs_list": [ 00:17:42.756 { 00:17:42.756 "name": "spare", 00:17:42.756 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:42.756 
"is_configured": true, 00:17:42.756 "data_offset": 256, 00:17:42.756 "data_size": 7936 00:17:42.756 }, 00:17:42.756 { 00:17:42.756 "name": "BaseBdev2", 00:17:42.756 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:42.756 "is_configured": true, 00:17:42.756 "data_offset": 256, 00:17:42.756 "data_size": 7936 00:17:42.756 } 00:17:42.756 ] 00:17:42.756 }' 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.756 "name": "raid_bdev1", 00:17:42.756 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:42.756 "strip_size_kb": 0, 00:17:42.756 "state": "online", 00:17:42.756 "raid_level": "raid1", 00:17:42.756 "superblock": true, 00:17:42.756 "num_base_bdevs": 2, 00:17:42.756 "num_base_bdevs_discovered": 2, 00:17:42.756 "num_base_bdevs_operational": 2, 00:17:42.756 "base_bdevs_list": [ 00:17:42.756 { 00:17:42.756 "name": "spare", 00:17:42.756 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:42.756 "is_configured": true, 00:17:42.756 "data_offset": 256, 00:17:42.756 "data_size": 7936 00:17:42.756 }, 00:17:42.756 { 00:17:42.756 "name": "BaseBdev2", 00:17:42.756 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:42.756 "is_configured": true, 00:17:42.756 "data_offset": 256, 00:17:42.756 "data_size": 7936 00:17:42.756 } 00:17:42.756 ] 00:17:42.756 }' 00:17:42.756 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.013 04:06:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.013 "name": "raid_bdev1", 00:17:43.013 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:43.013 "strip_size_kb": 0, 00:17:43.013 "state": "online", 00:17:43.013 "raid_level": "raid1", 00:17:43.013 "superblock": true, 00:17:43.013 "num_base_bdevs": 2, 00:17:43.013 "num_base_bdevs_discovered": 2, 00:17:43.013 "num_base_bdevs_operational": 2, 00:17:43.013 "base_bdevs_list": [ 00:17:43.013 { 00:17:43.013 "name": "spare", 00:17:43.013 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:43.013 "is_configured": true, 00:17:43.013 "data_offset": 256, 00:17:43.013 "data_size": 7936 00:17:43.013 }, 00:17:43.013 { 00:17:43.013 "name": "BaseBdev2", 00:17:43.013 "uuid": 
"84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:43.013 "is_configured": true, 00:17:43.013 "data_offset": 256, 00:17:43.013 "data_size": 7936 00:17:43.013 } 00:17:43.013 ] 00:17:43.013 }' 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.013 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.582 [2024-11-18 04:06:39.936121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.582 [2024-11-18 04:06:39.936207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.582 [2024-11-18 04:06:39.936290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.582 [2024-11-18 04:06:39.936368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.582 [2024-11-18 04:06:39.936428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.582 04:06:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:43.582 /dev/nbd0 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # 
local i 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.842 1+0 records in 00:17:43.842 1+0 records out 00:17:43.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437867 s, 9.4 MB/s 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.842 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:43.842 
/dev/nbd1 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.102 1+0 records in 00:17:44.102 1+0 records out 00:17:44.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346563 s, 11.8 MB/s 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@893 -- # return 0 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.102 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:44.362 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:44.362 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:44.362 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:44.362 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.362 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.363 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:44.363 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:44.363 04:06:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.363 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.363 04:06:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.623 04:06:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.623 [2024-11-18 04:06:41.123818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.623 [2024-11-18 04:06:41.123931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.623 [2024-11-18 04:06:41.123969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:44.623 [2024-11-18 04:06:41.123997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.623 [2024-11-18 04:06:41.126046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.623 [2024-11-18 04:06:41.126114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.623 [2024-11-18 04:06:41.126232] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.623 [2024-11-18 04:06:41.126302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.623 [2024-11-18 04:06:41.126491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.623 spare 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.623 [2024-11-18 04:06:41.226424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:44.623 [2024-11-18 04:06:41.226483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:44.623 [2024-11-18 04:06:41.226760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0001c1b50 00:17:44.623 [2024-11-18 04:06:41.226968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:44.623 [2024-11-18 04:06:41.227011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:44.623 [2024-11-18 04:06:41.227192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.623 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.883 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.883 "name": "raid_bdev1", 00:17:44.883 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:44.883 "strip_size_kb": 0, 00:17:44.883 "state": "online", 00:17:44.883 "raid_level": "raid1", 00:17:44.883 "superblock": true, 00:17:44.883 "num_base_bdevs": 2, 00:17:44.883 "num_base_bdevs_discovered": 2, 00:17:44.883 "num_base_bdevs_operational": 2, 00:17:44.883 "base_bdevs_list": [ 00:17:44.883 { 00:17:44.883 "name": "spare", 00:17:44.883 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:44.883 "is_configured": true, 00:17:44.883 "data_offset": 256, 00:17:44.883 "data_size": 7936 00:17:44.883 }, 00:17:44.883 { 00:17:44.883 "name": "BaseBdev2", 00:17:44.883 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:44.883 "is_configured": true, 00:17:44.883 "data_offset": 256, 00:17:44.883 "data_size": 7936 00:17:44.883 } 00:17:44.883 ] 00:17:44.883 }' 00:17:44.883 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.883 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.143 04:06:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.143 "name": "raid_bdev1", 00:17:45.143 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:45.143 "strip_size_kb": 0, 00:17:45.143 "state": "online", 00:17:45.143 "raid_level": "raid1", 00:17:45.143 "superblock": true, 00:17:45.143 "num_base_bdevs": 2, 00:17:45.143 "num_base_bdevs_discovered": 2, 00:17:45.143 "num_base_bdevs_operational": 2, 00:17:45.143 "base_bdevs_list": [ 00:17:45.143 { 00:17:45.143 "name": "spare", 00:17:45.143 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:45.143 "is_configured": true, 00:17:45.143 "data_offset": 256, 00:17:45.143 "data_size": 7936 00:17:45.143 }, 00:17:45.143 { 00:17:45.143 "name": "BaseBdev2", 00:17:45.143 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:45.143 "is_configured": true, 00:17:45.143 "data_offset": 256, 00:17:45.143 "data_size": 7936 00:17:45.143 } 00:17:45.143 ] 00:17:45.143 }' 00:17:45.143 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.403 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.403 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.404 04:06:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.404 [2024-11-18 04:06:41.890572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.404 "name": "raid_bdev1", 00:17:45.404 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:45.404 "strip_size_kb": 0, 00:17:45.404 "state": "online", 00:17:45.404 "raid_level": "raid1", 00:17:45.404 "superblock": true, 00:17:45.404 "num_base_bdevs": 2, 00:17:45.404 "num_base_bdevs_discovered": 1, 00:17:45.404 "num_base_bdevs_operational": 1, 00:17:45.404 "base_bdevs_list": [ 00:17:45.404 { 00:17:45.404 "name": null, 00:17:45.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.404 "is_configured": false, 00:17:45.404 "data_offset": 0, 00:17:45.404 "data_size": 7936 00:17:45.404 }, 00:17:45.404 { 00:17:45.404 "name": "BaseBdev2", 00:17:45.404 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:45.404 "is_configured": true, 00:17:45.404 "data_offset": 256, 00:17:45.404 "data_size": 7936 00:17:45.404 } 00:17:45.404 ] 00:17:45.404 }' 00:17:45.404 04:06:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.404 04:06:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 04:06:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.972 04:06:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.972 04:06:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.972 [2024-11-18 04:06:42.389758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.972 [2024-11-18 04:06:42.389986] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.972 [2024-11-18 04:06:42.390045] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:45.972 [2024-11-18 04:06:42.390098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.972 [2024-11-18 04:06:42.405369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:45.972 04:06:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.972 04:06:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:45.972 [2024-11-18 04:06:42.407209] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.913 "name": "raid_bdev1", 00:17:46.913 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:46.913 "strip_size_kb": 0, 00:17:46.913 "state": "online", 00:17:46.913 "raid_level": "raid1", 00:17:46.913 "superblock": true, 00:17:46.913 "num_base_bdevs": 2, 00:17:46.913 "num_base_bdevs_discovered": 2, 00:17:46.913 "num_base_bdevs_operational": 2, 00:17:46.913 "process": { 00:17:46.913 "type": "rebuild", 00:17:46.913 "target": "spare", 00:17:46.913 "progress": { 00:17:46.913 "blocks": 2560, 00:17:46.913 "percent": 32 00:17:46.913 } 00:17:46.913 }, 00:17:46.913 "base_bdevs_list": [ 00:17:46.913 { 00:17:46.913 "name": "spare", 00:17:46.913 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:46.913 "is_configured": true, 00:17:46.913 "data_offset": 256, 00:17:46.913 "data_size": 7936 00:17:46.913 }, 00:17:46.913 { 00:17:46.913 "name": "BaseBdev2", 00:17:46.913 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:46.913 "is_configured": true, 00:17:46.913 "data_offset": 256, 00:17:46.913 "data_size": 7936 00:17:46.913 } 00:17:46.913 ] 00:17:46.913 }' 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:46.913 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.173 [2024-11-18 04:06:43.571494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.173 [2024-11-18 04:06:43.611877] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.173 [2024-11-18 04:06:43.611973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.173 [2024-11-18 04:06:43.612024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.173 [2024-11-18 04:06:43.612046] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.173 "name": "raid_bdev1", 00:17:47.173 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:47.173 "strip_size_kb": 0, 00:17:47.173 "state": "online", 00:17:47.173 "raid_level": "raid1", 00:17:47.173 "superblock": true, 00:17:47.173 "num_base_bdevs": 2, 00:17:47.173 "num_base_bdevs_discovered": 1, 00:17:47.173 "num_base_bdevs_operational": 1, 00:17:47.173 "base_bdevs_list": [ 00:17:47.173 { 00:17:47.173 "name": null, 00:17:47.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.173 "is_configured": false, 00:17:47.173 "data_offset": 0, 00:17:47.173 "data_size": 7936 00:17:47.173 }, 00:17:47.173 { 00:17:47.173 "name": "BaseBdev2", 00:17:47.173 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:47.173 "is_configured": true, 00:17:47.173 "data_offset": 256, 00:17:47.173 "data_size": 7936 00:17:47.173 } 00:17:47.173 ] 00:17:47.173 }' 
00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.173 04:06:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 04:06:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.744 04:06:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.744 04:06:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.744 [2024-11-18 04:06:44.120464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.744 [2024-11-18 04:06:44.120566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.744 [2024-11-18 04:06:44.120604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:47.744 [2024-11-18 04:06:44.120633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.744 [2024-11-18 04:06:44.121110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.744 [2024-11-18 04:06:44.121169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.744 [2024-11-18 04:06:44.121291] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:47.744 [2024-11-18 04:06:44.121331] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:47.744 [2024-11-18 04:06:44.121376] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:47.744 [2024-11-18 04:06:44.121419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.744 [2024-11-18 04:06:44.136366] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:47.744 spare 00:17:47.744 04:06:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.744 [2024-11-18 04:06:44.138195] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.744 04:06:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.685 "name": "raid_bdev1", 00:17:48.685 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:48.685 "strip_size_kb": 0, 00:17:48.685 
"state": "online", 00:17:48.685 "raid_level": "raid1", 00:17:48.685 "superblock": true, 00:17:48.685 "num_base_bdevs": 2, 00:17:48.685 "num_base_bdevs_discovered": 2, 00:17:48.685 "num_base_bdevs_operational": 2, 00:17:48.685 "process": { 00:17:48.685 "type": "rebuild", 00:17:48.685 "target": "spare", 00:17:48.685 "progress": { 00:17:48.685 "blocks": 2560, 00:17:48.685 "percent": 32 00:17:48.685 } 00:17:48.685 }, 00:17:48.685 "base_bdevs_list": [ 00:17:48.685 { 00:17:48.685 "name": "spare", 00:17:48.685 "uuid": "3905c02d-98a1-502d-8c93-cf0796870f99", 00:17:48.685 "is_configured": true, 00:17:48.685 "data_offset": 256, 00:17:48.685 "data_size": 7936 00:17:48.685 }, 00:17:48.685 { 00:17:48.685 "name": "BaseBdev2", 00:17:48.685 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:48.685 "is_configured": true, 00:17:48.685 "data_offset": 256, 00:17:48.685 "data_size": 7936 00:17:48.685 } 00:17:48.685 ] 00:17:48.685 }' 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.685 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.685 [2024-11-18 04:06:45.301923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.946 [2024-11-18 04:06:45.342854] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:48.946 [2024-11-18 04:06:45.342949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.946 [2024-11-18 04:06:45.343001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.946 [2024-11-18 04:06:45.343022] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.946 04:06:45 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.946 "name": "raid_bdev1", 00:17:48.946 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:48.946 "strip_size_kb": 0, 00:17:48.946 "state": "online", 00:17:48.946 "raid_level": "raid1", 00:17:48.946 "superblock": true, 00:17:48.946 "num_base_bdevs": 2, 00:17:48.946 "num_base_bdevs_discovered": 1, 00:17:48.946 "num_base_bdevs_operational": 1, 00:17:48.946 "base_bdevs_list": [ 00:17:48.946 { 00:17:48.946 "name": null, 00:17:48.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.946 "is_configured": false, 00:17:48.946 "data_offset": 0, 00:17:48.946 "data_size": 7936 00:17:48.946 }, 00:17:48.946 { 00:17:48.946 "name": "BaseBdev2", 00:17:48.946 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:48.946 "is_configured": true, 00:17:48.946 "data_offset": 256, 00:17:48.946 "data_size": 7936 00:17:48.946 } 00:17:48.946 ] 00:17:48.946 }' 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.946 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.206 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.467 "name": "raid_bdev1", 00:17:49.467 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:49.467 "strip_size_kb": 0, 00:17:49.467 "state": "online", 00:17:49.467 "raid_level": "raid1", 00:17:49.467 "superblock": true, 00:17:49.467 "num_base_bdevs": 2, 00:17:49.467 "num_base_bdevs_discovered": 1, 00:17:49.467 "num_base_bdevs_operational": 1, 00:17:49.467 "base_bdevs_list": [ 00:17:49.467 { 00:17:49.467 "name": null, 00:17:49.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.467 "is_configured": false, 00:17:49.467 "data_offset": 0, 00:17:49.467 "data_size": 7936 00:17:49.467 }, 00:17:49.467 { 00:17:49.467 "name": "BaseBdev2", 00:17:49.467 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:49.467 "is_configured": true, 00:17:49.467 "data_offset": 256, 00:17:49.467 "data_size": 7936 00:17:49.467 } 00:17:49.467 ] 00:17:49.467 }' 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.467 [2024-11-18 04:06:45.985828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.467 [2024-11-18 04:06:45.985943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.467 [2024-11-18 04:06:45.985981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:49.467 [2024-11-18 04:06:45.986002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.467 [2024-11-18 04:06:45.986440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.467 [2024-11-18 04:06:45.986457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:49.467 [2024-11-18 04:06:45.986525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:49.467 [2024-11-18 04:06:45.986537] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.467 [2024-11-18 04:06:45.986548] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.467 [2024-11-18 04:06:45.986557] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:49.467 BaseBdev1 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.467 04:06:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.458 04:06:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.458 "name": "raid_bdev1", 00:17:50.458 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:50.458 "strip_size_kb": 0, 00:17:50.458 "state": "online", 00:17:50.458 "raid_level": "raid1", 00:17:50.458 "superblock": true, 00:17:50.458 "num_base_bdevs": 2, 00:17:50.458 "num_base_bdevs_discovered": 1, 00:17:50.458 "num_base_bdevs_operational": 1, 00:17:50.458 "base_bdevs_list": [ 00:17:50.458 { 00:17:50.458 "name": null, 00:17:50.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.458 "is_configured": false, 00:17:50.458 "data_offset": 0, 00:17:50.458 "data_size": 7936 00:17:50.458 }, 00:17:50.458 { 00:17:50.458 "name": "BaseBdev2", 00:17:50.458 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 256, 00:17:50.458 "data_size": 7936 00:17:50.458 } 00:17:50.458 ] 00:17:50.458 }' 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.458 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.028 "name": "raid_bdev1", 00:17:51.028 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:51.028 "strip_size_kb": 0, 00:17:51.028 "state": "online", 00:17:51.028 "raid_level": "raid1", 00:17:51.028 "superblock": true, 00:17:51.028 "num_base_bdevs": 2, 00:17:51.028 "num_base_bdevs_discovered": 1, 00:17:51.028 "num_base_bdevs_operational": 1, 00:17:51.028 "base_bdevs_list": [ 00:17:51.028 { 00:17:51.028 "name": null, 00:17:51.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.028 "is_configured": false, 00:17:51.028 "data_offset": 0, 00:17:51.028 "data_size": 7936 00:17:51.028 }, 00:17:51.028 { 00:17:51.028 "name": "BaseBdev2", 00:17:51.028 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:51.028 "is_configured": true, 00:17:51.028 "data_offset": 256, 00:17:51.028 "data_size": 7936 00:17:51.028 } 00:17:51.028 ] 00:17:51.028 }' 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.028 [2024-11-18 04:06:47.619016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.028 [2024-11-18 04:06:47.619219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.028 [2024-11-18 04:06:47.619275] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:51.028 request: 00:17:51.028 { 00:17:51.028 "base_bdev": "BaseBdev1", 00:17:51.028 "raid_bdev": "raid_bdev1", 00:17:51.028 "method": "bdev_raid_add_base_bdev", 00:17:51.028 "req_id": 1 00:17:51.028 } 00:17:51.028 Got JSON-RPC error response 00:17:51.028 response: 00:17:51.028 { 00:17:51.028 "code": -22, 00:17:51.028 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:51.028 } 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.028 04:06:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.411 "name": "raid_bdev1", 00:17:52.411 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:52.411 "strip_size_kb": 0, 00:17:52.411 "state": "online", 00:17:52.411 "raid_level": "raid1", 00:17:52.411 "superblock": true, 00:17:52.411 "num_base_bdevs": 2, 00:17:52.411 "num_base_bdevs_discovered": 1, 00:17:52.411 "num_base_bdevs_operational": 1, 00:17:52.411 "base_bdevs_list": [ 00:17:52.411 { 00:17:52.411 "name": null, 00:17:52.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.411 "is_configured": false, 00:17:52.411 "data_offset": 0, 00:17:52.411 "data_size": 7936 00:17:52.411 }, 00:17:52.411 { 00:17:52.411 "name": "BaseBdev2", 00:17:52.411 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:52.411 "is_configured": true, 00:17:52.411 "data_offset": 256, 00:17:52.411 "data_size": 7936 00:17:52.411 } 00:17:52.411 ] 00:17:52.411 }' 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.411 04:06:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.676 04:06:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.676 "name": "raid_bdev1", 00:17:52.676 "uuid": "848cd337-a268-404c-a1d6-1c17bb60e519", 00:17:52.676 "strip_size_kb": 0, 00:17:52.676 "state": "online", 00:17:52.676 "raid_level": "raid1", 00:17:52.676 "superblock": true, 00:17:52.676 "num_base_bdevs": 2, 00:17:52.676 "num_base_bdevs_discovered": 1, 00:17:52.676 "num_base_bdevs_operational": 1, 00:17:52.676 "base_bdevs_list": [ 00:17:52.676 { 00:17:52.676 "name": null, 00:17:52.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.676 "is_configured": false, 00:17:52.676 "data_offset": 0, 00:17:52.676 "data_size": 7936 00:17:52.676 }, 00:17:52.676 { 00:17:52.676 "name": "BaseBdev2", 00:17:52.676 "uuid": "84780ce5-529b-5fff-a830-6d7db61bd6b1", 00:17:52.676 "is_configured": true, 00:17:52.676 "data_offset": 256, 00:17:52.676 "data_size": 7936 00:17:52.676 } 00:17:52.676 ] 00:17:52.676 }' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.676 04:06:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86393 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86393 ']' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86393 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86393 00:17:52.676 killing process with pid 86393 00:17:52.676 Received shutdown signal, test time was about 60.000000 seconds 00:17:52.676 00:17:52.676 Latency(us) 00:17:52.676 [2024-11-18T04:06:49.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.676 [2024-11-18T04:06:49.317Z] =================================================================================================================== 00:17:52.676 [2024-11-18T04:06:49.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86393' 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86393 00:17:52.676 [2024-11-18 04:06:49.286838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.676 [2024-11-18 04:06:49.286951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.676 [2024-11-18 04:06:49.286996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:17:52.676 [2024-11-18 04:06:49.287007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:52.676 04:06:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86393 00:17:52.938 [2024-11-18 04:06:49.562876] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.321 04:06:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:54.321 00:17:54.321 real 0m19.935s 00:17:54.321 user 0m26.132s 00:17:54.321 sys 0m2.771s 00:17:54.321 ************************************ 00:17:54.321 END TEST raid_rebuild_test_sb_4k 00:17:54.321 ************************************ 00:17:54.321 04:06:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.321 04:06:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.321 04:06:50 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:54.321 04:06:50 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:54.321 04:06:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:54.321 04:06:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.321 04:06:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.321 ************************************ 00:17:54.321 START TEST raid_state_function_test_sb_md_separate 00:17:54.321 ************************************ 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:54.321 04:06:50 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:54.321 04:06:50 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:54.321 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87084 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:54.322 Process raid pid: 87084 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87084' 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87084 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87084 ']' 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.322 04:06:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.322 [2024-11-18 04:06:50.761944] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:54.322 [2024-11-18 04:06:50.762067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.322 [2024-11-18 04:06:50.941659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.582 [2024-11-18 04:06:51.044753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.842 [2024-11-18 04:06:51.230101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.842 [2024-11-18 04:06:51.230135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.101 [2024-11-18 04:06:51.591166] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.101 [2024-11-18 04:06:51.591266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:55.101 [2024-11-18 04:06:51.591295] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.101 [2024-11-18 04:06:51.591318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.101 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:55.102 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.102 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.102 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.102 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.102 "name": "Existed_Raid", 00:17:55.102 "uuid": "ec850c41-c937-45b2-ba72-891f7c68b6b4", 00:17:55.102 "strip_size_kb": 0, 00:17:55.102 "state": "configuring", 00:17:55.102 "raid_level": "raid1", 00:17:55.102 "superblock": true, 00:17:55.102 "num_base_bdevs": 2, 00:17:55.102 "num_base_bdevs_discovered": 0, 00:17:55.102 "num_base_bdevs_operational": 2, 00:17:55.102 "base_bdevs_list": [ 00:17:55.102 { 00:17:55.102 "name": "BaseBdev1", 00:17:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.102 "is_configured": false, 00:17:55.102 "data_offset": 0, 00:17:55.102 "data_size": 0 00:17:55.102 }, 00:17:55.102 { 00:17:55.102 "name": "BaseBdev2", 00:17:55.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.102 "is_configured": false, 00:17:55.102 "data_offset": 0, 00:17:55.102 "data_size": 0 00:17:55.102 } 00:17:55.102 ] 00:17:55.102 }' 00:17:55.102 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.102 04:06:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.672 
[2024-11-18 04:06:52.026436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.672 [2024-11-18 04:06:52.026503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.672 [2024-11-18 04:06:52.038412] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.672 [2024-11-18 04:06:52.038500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.672 [2024-11-18 04:06:52.038525] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.672 [2024-11-18 04:06:52.038549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.672 [2024-11-18 04:06:52.084657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.672 
BaseBdev1 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.672 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.672 [ 00:17:55.672 { 00:17:55.672 "name": "BaseBdev1", 00:17:55.672 "aliases": [ 00:17:55.673 "4c2c46e2-67f1-4259-8dd7-1e8820e812fd" 00:17:55.673 ], 00:17:55.673 "product_name": "Malloc disk", 
00:17:55.673 "block_size": 4096, 00:17:55.673 "num_blocks": 8192, 00:17:55.673 "uuid": "4c2c46e2-67f1-4259-8dd7-1e8820e812fd", 00:17:55.673 "md_size": 32, 00:17:55.673 "md_interleave": false, 00:17:55.673 "dif_type": 0, 00:17:55.673 "assigned_rate_limits": { 00:17:55.673 "rw_ios_per_sec": 0, 00:17:55.673 "rw_mbytes_per_sec": 0, 00:17:55.673 "r_mbytes_per_sec": 0, 00:17:55.673 "w_mbytes_per_sec": 0 00:17:55.673 }, 00:17:55.673 "claimed": true, 00:17:55.673 "claim_type": "exclusive_write", 00:17:55.673 "zoned": false, 00:17:55.673 "supported_io_types": { 00:17:55.673 "read": true, 00:17:55.673 "write": true, 00:17:55.673 "unmap": true, 00:17:55.673 "flush": true, 00:17:55.673 "reset": true, 00:17:55.673 "nvme_admin": false, 00:17:55.673 "nvme_io": false, 00:17:55.673 "nvme_io_md": false, 00:17:55.673 "write_zeroes": true, 00:17:55.673 "zcopy": true, 00:17:55.673 "get_zone_info": false, 00:17:55.673 "zone_management": false, 00:17:55.673 "zone_append": false, 00:17:55.673 "compare": false, 00:17:55.673 "compare_and_write": false, 00:17:55.673 "abort": true, 00:17:55.673 "seek_hole": false, 00:17:55.673 "seek_data": false, 00:17:55.673 "copy": true, 00:17:55.673 "nvme_iov_md": false 00:17:55.673 }, 00:17:55.673 "memory_domains": [ 00:17:55.673 { 00:17:55.673 "dma_device_id": "system", 00:17:55.673 "dma_device_type": 1 00:17:55.673 }, 00:17:55.673 { 00:17:55.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.673 "dma_device_type": 2 00:17:55.673 } 00:17:55.673 ], 00:17:55.673 "driver_specific": {} 00:17:55.673 } 00:17:55.673 ] 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.673 04:06:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.673 "name": "Existed_Raid", 00:17:55.673 "uuid": "7ef34b26-1e8a-4698-a081-3117f9220979", 
00:17:55.673 "strip_size_kb": 0, 00:17:55.673 "state": "configuring", 00:17:55.673 "raid_level": "raid1", 00:17:55.673 "superblock": true, 00:17:55.673 "num_base_bdevs": 2, 00:17:55.673 "num_base_bdevs_discovered": 1, 00:17:55.673 "num_base_bdevs_operational": 2, 00:17:55.673 "base_bdevs_list": [ 00:17:55.673 { 00:17:55.673 "name": "BaseBdev1", 00:17:55.673 "uuid": "4c2c46e2-67f1-4259-8dd7-1e8820e812fd", 00:17:55.673 "is_configured": true, 00:17:55.673 "data_offset": 256, 00:17:55.673 "data_size": 7936 00:17:55.673 }, 00:17:55.673 { 00:17:55.673 "name": "BaseBdev2", 00:17:55.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.673 "is_configured": false, 00:17:55.673 "data_offset": 0, 00:17:55.673 "data_size": 0 00:17:55.673 } 00:17:55.673 ] 00:17:55.673 }' 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.673 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.242 [2024-11-18 04:06:52.595869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.242 [2024-11-18 04:06:52.595943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:56.242 04:06:52 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.242 [2024-11-18 04:06:52.607888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.242 [2024-11-18 04:06:52.609654] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.242 [2024-11-18 04:06:52.609742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.242 "name": "Existed_Raid", 00:17:56.242 "uuid": "599ebd91-266a-4dff-a42d-75cf7e9d980f", 00:17:56.242 "strip_size_kb": 0, 00:17:56.242 "state": "configuring", 00:17:56.242 "raid_level": "raid1", 00:17:56.242 "superblock": true, 00:17:56.242 "num_base_bdevs": 2, 00:17:56.242 "num_base_bdevs_discovered": 1, 00:17:56.242 "num_base_bdevs_operational": 2, 00:17:56.242 "base_bdevs_list": [ 00:17:56.242 { 00:17:56.242 "name": "BaseBdev1", 00:17:56.242 "uuid": "4c2c46e2-67f1-4259-8dd7-1e8820e812fd", 00:17:56.242 "is_configured": true, 00:17:56.242 "data_offset": 256, 00:17:56.242 "data_size": 7936 00:17:56.242 }, 00:17:56.242 { 00:17:56.242 "name": "BaseBdev2", 00:17:56.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.242 "is_configured": false, 00:17:56.242 "data_offset": 0, 00:17:56.242 "data_size": 0 00:17:56.242 } 00:17:56.242 ] 00:17:56.242 }' 00:17:56.242 04:06:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.242 04:06:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.503 [2024-11-18 04:06:53.083810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.503 [2024-11-18 04:06:53.084140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.503 [2024-11-18 04:06:53.084193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.503 [2024-11-18 04:06:53.084338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:56.503 [2024-11-18 04:06:53.084485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.503 [2024-11-18 04:06:53.084523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:56.503 [2024-11-18 04:06:53.084639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.503 BaseBdev2 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.503 [ 00:17:56.503 { 00:17:56.503 "name": "BaseBdev2", 00:17:56.503 "aliases": [ 00:17:56.503 "c333678e-d31a-41a2-9bf4-bd6a5b52b0fe" 00:17:56.503 ], 00:17:56.503 "product_name": "Malloc disk", 00:17:56.503 "block_size": 4096, 00:17:56.503 "num_blocks": 8192, 00:17:56.503 "uuid": "c333678e-d31a-41a2-9bf4-bd6a5b52b0fe", 00:17:56.503 "md_size": 32, 00:17:56.503 "md_interleave": false, 00:17:56.503 "dif_type": 0, 00:17:56.503 "assigned_rate_limits": { 00:17:56.503 "rw_ios_per_sec": 0, 00:17:56.503 "rw_mbytes_per_sec": 0, 00:17:56.503 "r_mbytes_per_sec": 0, 00:17:56.503 "w_mbytes_per_sec": 0 00:17:56.503 }, 00:17:56.503 "claimed": true, 00:17:56.503 "claim_type": 
"exclusive_write", 00:17:56.503 "zoned": false, 00:17:56.503 "supported_io_types": { 00:17:56.503 "read": true, 00:17:56.503 "write": true, 00:17:56.503 "unmap": true, 00:17:56.503 "flush": true, 00:17:56.503 "reset": true, 00:17:56.503 "nvme_admin": false, 00:17:56.503 "nvme_io": false, 00:17:56.503 "nvme_io_md": false, 00:17:56.503 "write_zeroes": true, 00:17:56.503 "zcopy": true, 00:17:56.503 "get_zone_info": false, 00:17:56.503 "zone_management": false, 00:17:56.503 "zone_append": false, 00:17:56.503 "compare": false, 00:17:56.503 "compare_and_write": false, 00:17:56.503 "abort": true, 00:17:56.503 "seek_hole": false, 00:17:56.503 "seek_data": false, 00:17:56.503 "copy": true, 00:17:56.503 "nvme_iov_md": false 00:17:56.503 }, 00:17:56.503 "memory_domains": [ 00:17:56.503 { 00:17:56.503 "dma_device_id": "system", 00:17:56.503 "dma_device_type": 1 00:17:56.503 }, 00:17:56.503 { 00:17:56.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.503 "dma_device_type": 2 00:17:56.503 } 00:17:56.503 ], 00:17:56.503 "driver_specific": {} 00:17:56.503 } 00:17:56.503 ] 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.503 
04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.503 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.764 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.764 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.764 "name": "Existed_Raid", 00:17:56.764 "uuid": "599ebd91-266a-4dff-a42d-75cf7e9d980f", 00:17:56.764 "strip_size_kb": 0, 00:17:56.764 "state": "online", 00:17:56.764 "raid_level": "raid1", 00:17:56.764 "superblock": true, 00:17:56.764 "num_base_bdevs": 2, 00:17:56.764 "num_base_bdevs_discovered": 2, 00:17:56.764 "num_base_bdevs_operational": 2, 00:17:56.764 
"base_bdevs_list": [ 00:17:56.764 { 00:17:56.764 "name": "BaseBdev1", 00:17:56.764 "uuid": "4c2c46e2-67f1-4259-8dd7-1e8820e812fd", 00:17:56.764 "is_configured": true, 00:17:56.764 "data_offset": 256, 00:17:56.764 "data_size": 7936 00:17:56.764 }, 00:17:56.764 { 00:17:56.764 "name": "BaseBdev2", 00:17:56.764 "uuid": "c333678e-d31a-41a2-9bf4-bd6a5b52b0fe", 00:17:56.764 "is_configured": true, 00:17:56.764 "data_offset": 256, 00:17:56.764 "data_size": 7936 00:17:56.764 } 00:17:56.764 ] 00:17:56.764 }' 00:17:56.764 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.764 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:57.024 [2024-11-18 04:06:53.559278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.024 "name": "Existed_Raid", 00:17:57.024 "aliases": [ 00:17:57.024 "599ebd91-266a-4dff-a42d-75cf7e9d980f" 00:17:57.024 ], 00:17:57.024 "product_name": "Raid Volume", 00:17:57.024 "block_size": 4096, 00:17:57.024 "num_blocks": 7936, 00:17:57.024 "uuid": "599ebd91-266a-4dff-a42d-75cf7e9d980f", 00:17:57.024 "md_size": 32, 00:17:57.024 "md_interleave": false, 00:17:57.024 "dif_type": 0, 00:17:57.024 "assigned_rate_limits": { 00:17:57.024 "rw_ios_per_sec": 0, 00:17:57.024 "rw_mbytes_per_sec": 0, 00:17:57.024 "r_mbytes_per_sec": 0, 00:17:57.024 "w_mbytes_per_sec": 0 00:17:57.024 }, 00:17:57.024 "claimed": false, 00:17:57.024 "zoned": false, 00:17:57.024 "supported_io_types": { 00:17:57.024 "read": true, 00:17:57.024 "write": true, 00:17:57.024 "unmap": false, 00:17:57.024 "flush": false, 00:17:57.024 "reset": true, 00:17:57.024 "nvme_admin": false, 00:17:57.024 "nvme_io": false, 00:17:57.024 "nvme_io_md": false, 00:17:57.024 "write_zeroes": true, 00:17:57.024 "zcopy": false, 00:17:57.024 "get_zone_info": false, 00:17:57.024 "zone_management": false, 00:17:57.024 "zone_append": false, 00:17:57.024 "compare": false, 00:17:57.024 "compare_and_write": false, 00:17:57.024 "abort": false, 00:17:57.024 "seek_hole": false, 00:17:57.024 "seek_data": false, 00:17:57.024 "copy": false, 00:17:57.024 "nvme_iov_md": false 00:17:57.024 }, 00:17:57.024 "memory_domains": [ 00:17:57.024 { 00:17:57.024 "dma_device_id": "system", 00:17:57.024 "dma_device_type": 1 00:17:57.024 }, 00:17:57.024 { 00:17:57.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.024 "dma_device_type": 2 00:17:57.024 }, 00:17:57.024 { 
00:17:57.024 "dma_device_id": "system", 00:17:57.024 "dma_device_type": 1 00:17:57.024 }, 00:17:57.024 { 00:17:57.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.024 "dma_device_type": 2 00:17:57.024 } 00:17:57.024 ], 00:17:57.024 "driver_specific": { 00:17:57.024 "raid": { 00:17:57.024 "uuid": "599ebd91-266a-4dff-a42d-75cf7e9d980f", 00:17:57.024 "strip_size_kb": 0, 00:17:57.024 "state": "online", 00:17:57.024 "raid_level": "raid1", 00:17:57.024 "superblock": true, 00:17:57.024 "num_base_bdevs": 2, 00:17:57.024 "num_base_bdevs_discovered": 2, 00:17:57.024 "num_base_bdevs_operational": 2, 00:17:57.024 "base_bdevs_list": [ 00:17:57.024 { 00:17:57.024 "name": "BaseBdev1", 00:17:57.024 "uuid": "4c2c46e2-67f1-4259-8dd7-1e8820e812fd", 00:17:57.024 "is_configured": true, 00:17:57.024 "data_offset": 256, 00:17:57.024 "data_size": 7936 00:17:57.024 }, 00:17:57.024 { 00:17:57.024 "name": "BaseBdev2", 00:17:57.024 "uuid": "c333678e-d31a-41a2-9bf4-bd6a5b52b0fe", 00:17:57.024 "is_configured": true, 00:17:57.024 "data_offset": 256, 00:17:57.024 "data_size": 7936 00:17:57.024 } 00:17:57.024 ] 00:17:57.024 } 00:17:57.024 } 00:17:57.024 }' 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:57.024 BaseBdev2' 00:17:57.024 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.285 [2024-11-18 04:06:53.766683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.285 "name": "Existed_Raid", 00:17:57.285 "uuid": "599ebd91-266a-4dff-a42d-75cf7e9d980f", 00:17:57.285 "strip_size_kb": 0, 00:17:57.285 "state": "online", 00:17:57.285 "raid_level": "raid1", 00:17:57.285 "superblock": true, 00:17:57.285 "num_base_bdevs": 2, 00:17:57.285 "num_base_bdevs_discovered": 1, 00:17:57.285 "num_base_bdevs_operational": 1, 00:17:57.285 "base_bdevs_list": [ 00:17:57.285 { 00:17:57.285 "name": null, 00:17:57.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.285 "is_configured": false, 00:17:57.285 "data_offset": 0, 00:17:57.285 "data_size": 7936 00:17:57.285 }, 00:17:57.285 { 00:17:57.285 "name": "BaseBdev2", 00:17:57.285 "uuid": 
"c333678e-d31a-41a2-9bf4-bd6a5b52b0fe", 00:17:57.285 "is_configured": true, 00:17:57.285 "data_offset": 256, 00:17:57.285 "data_size": 7936 00:17:57.285 } 00:17:57.285 ] 00:17:57.285 }' 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.285 04:06:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.855 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.855 [2024-11-18 04:06:54.412457] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:57.855 [2024-11-18 04:06:54.412556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.116 [2024-11-18 04:06:54.507634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.116 [2024-11-18 04:06:54.507683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.116 [2024-11-18 04:06:54.507695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:58.116 04:06:54 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87084 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87084 ']' 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87084 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87084 00:17:58.116 killing process with pid 87084 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87084' 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87084 00:17:58.116 [2024-11-18 04:06:54.603153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.116 04:06:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87084 00:17:58.116 [2024-11-18 04:06:54.618631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.057 04:06:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:59.057 00:17:59.057 real 0m4.996s 00:17:59.057 user 0m7.163s 00:17:59.057 sys 0m0.967s 00:17:59.057 04:06:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.057 
************************************ 00:17:59.057 END TEST raid_state_function_test_sb_md_separate 00:17:59.057 04:06:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.057 ************************************ 00:17:59.317 04:06:55 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:59.317 04:06:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:59.317 04:06:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.317 04:06:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.317 ************************************ 00:17:59.317 START TEST raid_superblock_test_md_separate 00:17:59.317 ************************************ 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87336 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87336 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87336 ']' 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.317 04:06:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.317 [2024-11-18 04:06:55.827071] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:59.317 [2024-11-18 04:06:55.827216] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87336 ] 00:17:59.577 [2024-11-18 04:06:55.999085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.577 [2024-11-18 04:06:56.106390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.837 [2024-11-18 04:06:56.297480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.837 [2024-11-18 04:06:56.297528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:00.097 04:06:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 malloc1 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 [2024-11-18 04:06:56.679198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.097 [2024-11-18 04:06:56.679251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.097 [2024-11-18 04:06:56.679270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:00.097 [2024-11-18 04:06:56.679279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.097 [2024-11-18 04:06:56.681119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.097 [2024-11-18 04:06:56.681154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:00.097 pt1 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 malloc2 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.097 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.097 04:06:56 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 [2024-11-18 04:06:56.733298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.097 [2024-11-18 04:06:56.733347] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.097 [2024-11-18 04:06:56.733366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:00.097 [2024-11-18 04:06:56.733374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.097 [2024-11-18 04:06:56.735194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.097 [2024-11-18 04:06:56.735226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.358 pt2 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.358 [2024-11-18 04:06:56.745302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.358 [2024-11-18 04:06:56.747034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.358 [2024-11-18 04:06:56.747190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:00.358 [2024-11-18 04:06:56.747204] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:00.358 [2024-11-18 04:06:56.747274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.358 [2024-11-18 04:06:56.747401] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:00.358 [2024-11-18 04:06:56.747420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:00.358 [2024-11-18 04:06:56.747533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.358 04:06:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.358 "name": "raid_bdev1", 00:18:00.358 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:00.358 "strip_size_kb": 0, 00:18:00.358 "state": "online", 00:18:00.358 "raid_level": "raid1", 00:18:00.358 "superblock": true, 00:18:00.358 "num_base_bdevs": 2, 00:18:00.358 "num_base_bdevs_discovered": 2, 00:18:00.358 "num_base_bdevs_operational": 2, 00:18:00.358 "base_bdevs_list": [ 00:18:00.358 { 00:18:00.358 "name": "pt1", 00:18:00.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.358 "is_configured": true, 00:18:00.358 "data_offset": 256, 00:18:00.358 "data_size": 7936 00:18:00.358 }, 00:18:00.358 { 00:18:00.358 "name": "pt2", 00:18:00.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.358 "is_configured": true, 00:18:00.358 "data_offset": 256, 00:18:00.358 "data_size": 7936 00:18:00.358 } 00:18:00.358 ] 00:18:00.358 }' 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.358 04:06:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.618 [2024-11-18 04:06:57.188743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.618 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.618 "name": "raid_bdev1", 00:18:00.618 "aliases": [ 00:18:00.618 "240125e7-bce8-46ef-97e7-e394be5bdf96" 00:18:00.618 ], 00:18:00.618 "product_name": "Raid Volume", 00:18:00.618 "block_size": 4096, 00:18:00.618 "num_blocks": 7936, 00:18:00.618 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:00.618 "md_size": 32, 00:18:00.618 "md_interleave": false, 00:18:00.618 "dif_type": 0, 00:18:00.618 "assigned_rate_limits": { 00:18:00.618 "rw_ios_per_sec": 0, 00:18:00.618 "rw_mbytes_per_sec": 0, 00:18:00.618 "r_mbytes_per_sec": 0, 00:18:00.618 "w_mbytes_per_sec": 0 00:18:00.618 }, 00:18:00.618 "claimed": false, 00:18:00.618 "zoned": false, 
00:18:00.618 "supported_io_types": { 00:18:00.618 "read": true, 00:18:00.618 "write": true, 00:18:00.618 "unmap": false, 00:18:00.618 "flush": false, 00:18:00.618 "reset": true, 00:18:00.618 "nvme_admin": false, 00:18:00.618 "nvme_io": false, 00:18:00.618 "nvme_io_md": false, 00:18:00.618 "write_zeroes": true, 00:18:00.618 "zcopy": false, 00:18:00.618 "get_zone_info": false, 00:18:00.618 "zone_management": false, 00:18:00.618 "zone_append": false, 00:18:00.618 "compare": false, 00:18:00.618 "compare_and_write": false, 00:18:00.618 "abort": false, 00:18:00.618 "seek_hole": false, 00:18:00.618 "seek_data": false, 00:18:00.618 "copy": false, 00:18:00.618 "nvme_iov_md": false 00:18:00.619 }, 00:18:00.619 "memory_domains": [ 00:18:00.619 { 00:18:00.619 "dma_device_id": "system", 00:18:00.619 "dma_device_type": 1 00:18:00.619 }, 00:18:00.619 { 00:18:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.619 "dma_device_type": 2 00:18:00.619 }, 00:18:00.619 { 00:18:00.619 "dma_device_id": "system", 00:18:00.619 "dma_device_type": 1 00:18:00.619 }, 00:18:00.619 { 00:18:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.619 "dma_device_type": 2 00:18:00.619 } 00:18:00.619 ], 00:18:00.619 "driver_specific": { 00:18:00.619 "raid": { 00:18:00.619 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:00.619 "strip_size_kb": 0, 00:18:00.619 "state": "online", 00:18:00.619 "raid_level": "raid1", 00:18:00.619 "superblock": true, 00:18:00.619 "num_base_bdevs": 2, 00:18:00.619 "num_base_bdevs_discovered": 2, 00:18:00.619 "num_base_bdevs_operational": 2, 00:18:00.619 "base_bdevs_list": [ 00:18:00.619 { 00:18:00.619 "name": "pt1", 00:18:00.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.619 "is_configured": true, 00:18:00.619 "data_offset": 256, 00:18:00.619 "data_size": 7936 00:18:00.619 }, 00:18:00.619 { 00:18:00.619 "name": "pt2", 00:18:00.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.619 "is_configured": true, 00:18:00.619 "data_offset": 256, 
00:18:00.619 "data_size": 7936 00:18:00.619 } 00:18:00.619 ] 00:18:00.619 } 00:18:00.619 } 00:18:00.619 }' 00:18:00.619 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:00.879 pt2' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.879 [2024-11-18 04:06:57.392408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=240125e7-bce8-46ef-97e7-e394be5bdf96 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 240125e7-bce8-46ef-97e7-e394be5bdf96 ']' 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.879 [2024-11-18 04:06:57.424097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.879 [2024-11-18 04:06:57.424166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.879 [2024-11-18 04:06:57.424247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.879 [2024-11-18 04:06:57.424323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.879 [2024-11-18 04:06:57.424357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.879 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
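After `bdev_raid_delete raid_bdev1` and the `bdev_passthru_delete pt1` / `pt2` calls in this stretch of the trace, the script lists all remaining bdevs and pipes them through jq's `[.[] | select(.product_name == "passthru")] | any` to assert that no passthru bdev survived teardown. A dependency-free sketch of that same check against a canned `bdev_get_bdevs`-style response (grep stands in for jq here, and the sample JSON is illustrative, not copied from a live app):

```shell
# Stand-in for: rpc_cmd bdev_get_bdevs | jq '[.[] | select(.product_name == "passthru")] | any'
# After teardown only the malloc base bdevs should remain.
bdevs_json='[{"name": "malloc1", "product_name": "Malloc disk"},
             {"name": "malloc2", "product_name": "Malloc disk"}]'

if grep -q '"product_name": *"passthru"' <<<"$bdevs_json"; then
    passthru_left=true
else
    passthru_left=false
fi
echo "passthru remaining: $passthru_left"
```

In the trace this evaluates to `false`, which is what lets the subsequent `'[' false == true ']'` guard skip the cleanup-failure path.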
00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.880 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:01.140 04:06:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.140 [2024-11-18 04:06:57.567882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:01.140 [2024-11-18 04:06:57.569699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:01.140 [2024-11-18 04:06:57.569812] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:01.140 [2024-11-18 04:06:57.569932] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:01.140 [2024-11-18 04:06:57.570007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.140 [2024-11-18 04:06:57.570053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:01.140 request: 00:18:01.140 { 00:18:01.140 "name": 
"raid_bdev1", 00:18:01.140 "raid_level": "raid1", 00:18:01.140 "base_bdevs": [ 00:18:01.140 "malloc1", 00:18:01.140 "malloc2" 00:18:01.140 ], 00:18:01.140 "superblock": false, 00:18:01.140 "method": "bdev_raid_create", 00:18:01.140 "req_id": 1 00:18:01.140 } 00:18:01.140 Got JSON-RPC error response 00:18:01.140 response: 00:18:01.140 { 00:18:01.140 "code": -17, 00:18:01.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:01.140 } 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.140 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.140 [2024-11-18 04:06:57.635749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.140 [2024-11-18 04:06:57.635842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.141 [2024-11-18 04:06:57.635882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.141 [2024-11-18 04:06:57.635915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.141 [2024-11-18 04:06:57.637739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.141 [2024-11-18 04:06:57.637843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.141 [2024-11-18 04:06:57.637906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:01.141 [2024-11-18 04:06:57.637987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.141 pt1 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.141 "name": "raid_bdev1", 00:18:01.141 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:01.141 "strip_size_kb": 0, 00:18:01.141 "state": "configuring", 00:18:01.141 "raid_level": "raid1", 00:18:01.141 "superblock": true, 00:18:01.141 "num_base_bdevs": 2, 00:18:01.141 "num_base_bdevs_discovered": 1, 00:18:01.141 "num_base_bdevs_operational": 2, 00:18:01.141 "base_bdevs_list": [ 00:18:01.141 { 00:18:01.141 "name": "pt1", 00:18:01.141 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.141 "is_configured": true, 00:18:01.141 "data_offset": 256, 00:18:01.141 "data_size": 7936 00:18:01.141 }, 00:18:01.141 { 00:18:01.141 "name": null, 00:18:01.141 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.141 "is_configured": false, 00:18:01.141 "data_offset": 256, 00:18:01.141 "data_size": 7936 00:18:01.141 } 00:18:01.141 ] 00:18:01.141 }' 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.141 04:06:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.400 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:01.400 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:01.400 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.400 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.400 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.400 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.660 [2024-11-18 04:06:58.043019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.660 [2024-11-18 04:06:58.043110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.660 [2024-11-18 04:06:58.043143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:01.660 [2024-11-18 04:06:58.043171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.660 [2024-11-18 04:06:58.043357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.660 [2024-11-18 04:06:58.043405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.660 [2024-11-18 04:06:58.043461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:01.660 [2024-11-18 04:06:58.043504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.660 [2024-11-18 04:06:58.043621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.660 [2024-11-18 04:06:58.043657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.660 [2024-11-18 04:06:58.043732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.660 [2024-11-18 04:06:58.043887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.660 [2024-11-18 04:06:58.043924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:01.660 [2024-11-18 04:06:58.044035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.660 pt2 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:01.660 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.661 "name": "raid_bdev1", 00:18:01.661 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:01.661 "strip_size_kb": 0, 00:18:01.661 "state": "online", 00:18:01.661 "raid_level": "raid1", 00:18:01.661 "superblock": true, 00:18:01.661 "num_base_bdevs": 2, 00:18:01.661 "num_base_bdevs_discovered": 2, 00:18:01.661 "num_base_bdevs_operational": 2, 00:18:01.661 "base_bdevs_list": [ 00:18:01.661 { 00:18:01.661 "name": "pt1", 00:18:01.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.661 "is_configured": true, 00:18:01.661 "data_offset": 256, 00:18:01.661 "data_size": 7936 00:18:01.661 }, 00:18:01.661 { 00:18:01.661 "name": "pt2", 00:18:01.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.661 "is_configured": true, 00:18:01.661 "data_offset": 256, 
00:18:01.661 "data_size": 7936 00:18:01.661 } 00:18:01.661 ] 00:18:01.661 }' 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.661 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.921 [2024-11-18 04:06:58.478598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.921 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.921 "name": "raid_bdev1", 00:18:01.921 "aliases": [ 00:18:01.921 "240125e7-bce8-46ef-97e7-e394be5bdf96" 00:18:01.921 ], 00:18:01.921 "product_name": 
"Raid Volume", 00:18:01.921 "block_size": 4096, 00:18:01.921 "num_blocks": 7936, 00:18:01.921 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:01.921 "md_size": 32, 00:18:01.921 "md_interleave": false, 00:18:01.921 "dif_type": 0, 00:18:01.921 "assigned_rate_limits": { 00:18:01.921 "rw_ios_per_sec": 0, 00:18:01.921 "rw_mbytes_per_sec": 0, 00:18:01.921 "r_mbytes_per_sec": 0, 00:18:01.921 "w_mbytes_per_sec": 0 00:18:01.921 }, 00:18:01.921 "claimed": false, 00:18:01.921 "zoned": false, 00:18:01.921 "supported_io_types": { 00:18:01.921 "read": true, 00:18:01.921 "write": true, 00:18:01.921 "unmap": false, 00:18:01.921 "flush": false, 00:18:01.921 "reset": true, 00:18:01.921 "nvme_admin": false, 00:18:01.921 "nvme_io": false, 00:18:01.921 "nvme_io_md": false, 00:18:01.921 "write_zeroes": true, 00:18:01.921 "zcopy": false, 00:18:01.921 "get_zone_info": false, 00:18:01.921 "zone_management": false, 00:18:01.921 "zone_append": false, 00:18:01.921 "compare": false, 00:18:01.921 "compare_and_write": false, 00:18:01.921 "abort": false, 00:18:01.921 "seek_hole": false, 00:18:01.921 "seek_data": false, 00:18:01.921 "copy": false, 00:18:01.921 "nvme_iov_md": false 00:18:01.921 }, 00:18:01.921 "memory_domains": [ 00:18:01.921 { 00:18:01.921 "dma_device_id": "system", 00:18:01.921 "dma_device_type": 1 00:18:01.921 }, 00:18:01.921 { 00:18:01.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.921 "dma_device_type": 2 00:18:01.921 }, 00:18:01.921 { 00:18:01.921 "dma_device_id": "system", 00:18:01.921 "dma_device_type": 1 00:18:01.921 }, 00:18:01.921 { 00:18:01.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.921 "dma_device_type": 2 00:18:01.921 } 00:18:01.921 ], 00:18:01.921 "driver_specific": { 00:18:01.921 "raid": { 00:18:01.921 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:01.921 "strip_size_kb": 0, 00:18:01.921 "state": "online", 00:18:01.921 "raid_level": "raid1", 00:18:01.921 "superblock": true, 00:18:01.921 "num_base_bdevs": 2, 00:18:01.921 
"num_base_bdevs_discovered": 2, 00:18:01.921 "num_base_bdevs_operational": 2, 00:18:01.921 "base_bdevs_list": [ 00:18:01.921 { 00:18:01.921 "name": "pt1", 00:18:01.921 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.921 "is_configured": true, 00:18:01.922 "data_offset": 256, 00:18:01.922 "data_size": 7936 00:18:01.922 }, 00:18:01.922 { 00:18:01.922 "name": "pt2", 00:18:01.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.922 "is_configured": true, 00:18:01.922 "data_offset": 256, 00:18:01.922 "data_size": 7936 00:18:01.922 } 00:18:01.922 ] 00:18:01.922 } 00:18:01.922 } 00:18:01.922 }' 00:18:01.922 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.182 pt2' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.182 
04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.182 [2024-11-18 04:06:58.726178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 240125e7-bce8-46ef-97e7-e394be5bdf96 '!=' 240125e7-bce8-46ef-97e7-e394be5bdf96 ']' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.182 [2024-11-18 04:06:58.777899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.182 04:06:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.182 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.442 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.442 "name": "raid_bdev1", 00:18:02.442 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:02.442 "strip_size_kb": 0, 00:18:02.442 "state": "online", 00:18:02.442 "raid_level": "raid1", 00:18:02.442 "superblock": true, 00:18:02.442 "num_base_bdevs": 2, 00:18:02.442 "num_base_bdevs_discovered": 1, 00:18:02.442 "num_base_bdevs_operational": 1, 00:18:02.442 "base_bdevs_list": [ 00:18:02.442 { 00:18:02.442 "name": null, 00:18:02.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.442 "is_configured": false, 00:18:02.442 "data_offset": 0, 00:18:02.442 "data_size": 7936 00:18:02.442 }, 00:18:02.442 { 00:18:02.442 "name": "pt2", 00:18:02.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.442 "is_configured": true, 00:18:02.442 "data_offset": 256, 00:18:02.442 "data_size": 7936 00:18:02.442 } 00:18:02.442 ] 00:18:02.442 }' 00:18:02.442 04:06:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:02.442 04:06:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.704 [2024-11-18 04:06:59.257021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.704 [2024-11-18 04:06:59.257078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.704 [2024-11-18 04:06:59.257131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.704 [2024-11-18 04:06:59.257166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.704 [2024-11-18 04:06:59.257175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:02.704 04:06:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.704 [2024-11-18 04:06:59.328921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.704 [2024-11-18 04:06:59.329004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.704 
[2024-11-18 04:06:59.329036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:02.704 [2024-11-18 04:06:59.329064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.704 [2024-11-18 04:06:59.330909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.704 [2024-11-18 04:06:59.330984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.704 [2024-11-18 04:06:59.331042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.704 [2024-11-18 04:06:59.331119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.704 [2024-11-18 04:06:59.331234] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:02.704 [2024-11-18 04:06:59.331272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:02.704 [2024-11-18 04:06:59.331356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:02.704 [2024-11-18 04:06:59.331488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:02.704 [2024-11-18 04:06:59.331523] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:02.704 [2024-11-18 04:06:59.331653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.704 pt2 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.704 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.968 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.968 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.968 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.968 "name": "raid_bdev1", 00:18:02.968 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:02.968 "strip_size_kb": 0, 00:18:02.968 "state": "online", 00:18:02.968 "raid_level": "raid1", 00:18:02.968 "superblock": true, 00:18:02.968 "num_base_bdevs": 2, 00:18:02.968 "num_base_bdevs_discovered": 1, 00:18:02.968 "num_base_bdevs_operational": 1, 00:18:02.968 "base_bdevs_list": [ 00:18:02.968 { 00:18:02.968 
"name": null, 00:18:02.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.968 "is_configured": false, 00:18:02.968 "data_offset": 256, 00:18:02.968 "data_size": 7936 00:18:02.968 }, 00:18:02.968 { 00:18:02.968 "name": "pt2", 00:18:02.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.968 "is_configured": true, 00:18:02.968 "data_offset": 256, 00:18:02.968 "data_size": 7936 00:18:02.968 } 00:18:02.968 ] 00:18:02.968 }' 00:18:02.968 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.968 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.227 [2024-11-18 04:06:59.776117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.227 [2024-11-18 04:06:59.776173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.227 [2024-11-18 04:06:59.776249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.227 [2024-11-18 04:06:59.776299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.227 [2024-11-18 04:06:59.776328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.227 04:06:59 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:03.227 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.228 [2024-11-18 04:06:59.840033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.228 [2024-11-18 04:06:59.840117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.228 [2024-11-18 04:06:59.840149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:03.228 [2024-11-18 04:06:59.840175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.228 [2024-11-18 04:06:59.842031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.228 [2024-11-18 04:06:59.842096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.228 [2024-11-18 04:06:59.842157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:03.228 [2024-11-18 04:06:59.842228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.228 [2024-11-18 04:06:59.842350] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:03.228 [2024-11-18 04:06:59.842395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.228 [2024-11-18 04:06:59.842427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:03.228 [2024-11-18 04:06:59.842521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.228 [2024-11-18 04:06:59.842610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:03.228 [2024-11-18 04:06:59.842644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.228 [2024-11-18 04:06:59.842721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:03.228 [2024-11-18 04:06:59.842867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:03.228 [2024-11-18 04:06:59.842904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:03.228 [2024-11-18 04:06:59.843024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.228 pt1 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.228 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.487 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.487 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.487 "name": "raid_bdev1", 00:18:03.488 "uuid": "240125e7-bce8-46ef-97e7-e394be5bdf96", 00:18:03.488 "strip_size_kb": 0, 00:18:03.488 "state": "online", 00:18:03.488 "raid_level": "raid1", 00:18:03.488 "superblock": true, 00:18:03.488 "num_base_bdevs": 2, 00:18:03.488 "num_base_bdevs_discovered": 1, 00:18:03.488 
"num_base_bdevs_operational": 1, 00:18:03.488 "base_bdevs_list": [ 00:18:03.488 { 00:18:03.488 "name": null, 00:18:03.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.488 "is_configured": false, 00:18:03.488 "data_offset": 256, 00:18:03.488 "data_size": 7936 00:18:03.488 }, 00:18:03.488 { 00:18:03.488 "name": "pt2", 00:18:03.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.488 "is_configured": true, 00:18:03.488 "data_offset": 256, 00:18:03.488 "data_size": 7936 00:18:03.488 } 00:18:03.488 ] 00:18:03.488 }' 00:18:03.488 04:06:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.488 04:06:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.747 [2024-11-18 
04:07:00.347327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:03.747 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 240125e7-bce8-46ef-97e7-e394be5bdf96 '!=' 240125e7-bce8-46ef-97e7-e394be5bdf96 ']' 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87336 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87336 ']' 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87336 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87336 00:18:04.007 killing process with pid 87336 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87336' 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87336 00:18:04.007 [2024-11-18 04:07:00.431519] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.007 [2024-11-18 04:07:00.431577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.007 [2024-11-18 04:07:00.431610] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:04.007 [2024-11-18 04:07:00.431624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:04.007 04:07:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87336 00:18:04.008 [2024-11-18 04:07:00.634754] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:05.391 04:07:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:05.391 00:18:05.391 real 0m5.929s 00:18:05.391 user 0m8.983s 00:18:05.391 sys 0m1.131s 00:18:05.391 ************************************ 00:18:05.391 END TEST raid_superblock_test_md_separate 00:18:05.391 ************************************ 00:18:05.391 04:07:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.391 04:07:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.391 04:07:01 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:05.391 04:07:01 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:05.391 04:07:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:05.391 04:07:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.391 04:07:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.391 ************************************ 00:18:05.391 START TEST raid_rebuild_test_sb_md_separate 00:18:05.391 ************************************ 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:05.391 
04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87659 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87659 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87659 ']' 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.391 04:07:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.391 [2024-11-18 04:07:01.852081] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:05.391 [2024-11-18 04:07:01.852269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:05.391 Zero copy mechanism will not be used. 00:18:05.391 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87659 ] 00:18:05.391 [2024-11-18 04:07:02.030348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.652 [2024-11-18 04:07:02.136032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.912 [2024-11-18 04:07:02.325228] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.912 [2024-11-18 04:07:02.325309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.173 BaseBdev1_malloc 
00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.173 [2024-11-18 04:07:02.688629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:06.173 [2024-11-18 04:07:02.688775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.173 [2024-11-18 04:07:02.688814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:06.173 [2024-11-18 04:07:02.688859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.173 [2024-11-18 04:07:02.690680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.173 [2024-11-18 04:07:02.690763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:06.173 BaseBdev1 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.173 BaseBdev2_malloc 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.173 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.173 [2024-11-18 04:07:02.737614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:06.173 [2024-11-18 04:07:02.737741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.173 [2024-11-18 04:07:02.737763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:06.173 [2024-11-18 04:07:02.737774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.173 [2024-11-18 04:07:02.739548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.174 [2024-11-18 04:07:02.739585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:06.174 BaseBdev2 00:18:06.174 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.174 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:06.174 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.174 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.435 spare_malloc 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.435 spare_delay 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.435 [2024-11-18 04:07:02.830948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.435 [2024-11-18 04:07:02.831084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.435 [2024-11-18 04:07:02.831107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:06.435 [2024-11-18 04:07:02.831118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.435 [2024-11-18 04:07:02.832955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.435 [2024-11-18 04:07:02.832993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.435 spare 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:06.435 [2024-11-18 04:07:02.842960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.435 [2024-11-18 04:07:02.844548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.435 [2024-11-18 04:07:02.844712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:06.435 [2024-11-18 04:07:02.844726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.435 [2024-11-18 04:07:02.844786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.435 [2024-11-18 04:07:02.844909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:06.435 [2024-11-18 04:07:02.844918] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:06.435 [2024-11-18 04:07:02.845036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.435 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.436 04:07:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.436 "name": "raid_bdev1", 00:18:06.436 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:06.436 "strip_size_kb": 0, 00:18:06.436 "state": "online", 00:18:06.436 "raid_level": "raid1", 00:18:06.436 "superblock": true, 00:18:06.436 "num_base_bdevs": 2, 00:18:06.436 "num_base_bdevs_discovered": 2, 00:18:06.436 "num_base_bdevs_operational": 2, 00:18:06.436 "base_bdevs_list": [ 00:18:06.436 { 00:18:06.436 "name": "BaseBdev1", 00:18:06.436 "uuid": "a2098d12-2be8-5565-a1af-ed8191c5266b", 00:18:06.436 "is_configured": true, 00:18:06.436 "data_offset": 256, 00:18:06.436 "data_size": 7936 00:18:06.436 }, 00:18:06.436 { 00:18:06.436 "name": "BaseBdev2", 00:18:06.436 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:06.436 "is_configured": true, 00:18:06.436 "data_offset": 256, 00:18:06.436 "data_size": 7936 
00:18:06.436 } 00:18:06.436 ] 00:18:06.436 }' 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.436 04:07:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.696 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.696 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:06.696 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.696 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.696 [2024-11-18 04:07:03.310382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.696 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:06.956 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:06.956 [2024-11-18 04:07:03.553771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:06.956 /dev/nbd0 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.217 1+0 records in 00:18:07.217 1+0 records out 00:18:07.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422815 s, 9.7 MB/s 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:07.217 04:07:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:07.217 04:07:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:07.787 7936+0 records in 00:18:07.787 7936+0 records out 00:18:07.787 32505856 bytes (33 MB, 31 MiB) copied, 0.620056 s, 52.4 MB/s 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.787 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:08.046 [2024-11-18 04:07:04.461033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.046 04:07:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.046 [2024-11-18 04:07:04.477094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.046 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.046 "name": "raid_bdev1", 00:18:08.046 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:08.047 "strip_size_kb": 0, 00:18:08.047 "state": "online", 00:18:08.047 "raid_level": "raid1", 00:18:08.047 "superblock": true, 00:18:08.047 "num_base_bdevs": 2, 00:18:08.047 "num_base_bdevs_discovered": 1, 00:18:08.047 "num_base_bdevs_operational": 1, 00:18:08.047 "base_bdevs_list": [ 00:18:08.047 { 00:18:08.047 "name": null, 00:18:08.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.047 "is_configured": false, 00:18:08.047 "data_offset": 0, 00:18:08.047 "data_size": 7936 00:18:08.047 }, 00:18:08.047 { 00:18:08.047 "name": "BaseBdev2", 00:18:08.047 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:08.047 "is_configured": true, 00:18:08.047 "data_offset": 256, 00:18:08.047 "data_size": 7936 00:18:08.047 } 00:18:08.047 ] 00:18:08.047 }' 00:18:08.047 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.047 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.306 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.306 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.306 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.306 [2024-11-18 04:07:04.940278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.566 [2024-11-18 04:07:04.954152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:08.566 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.566 04:07:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:08.566 [2024-11-18 04:07:04.955922] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.506 04:07:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.506 "name": "raid_bdev1", 00:18:09.506 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:09.506 "strip_size_kb": 0, 00:18:09.506 "state": "online", 00:18:09.506 "raid_level": "raid1", 00:18:09.506 "superblock": true, 00:18:09.506 "num_base_bdevs": 2, 00:18:09.506 "num_base_bdevs_discovered": 2, 00:18:09.506 "num_base_bdevs_operational": 2, 00:18:09.506 "process": { 00:18:09.506 "type": "rebuild", 00:18:09.506 "target": "spare", 00:18:09.506 "progress": { 00:18:09.506 "blocks": 2560, 00:18:09.506 "percent": 32 00:18:09.506 } 00:18:09.506 }, 00:18:09.506 "base_bdevs_list": [ 00:18:09.506 { 00:18:09.506 "name": "spare", 00:18:09.506 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:09.506 "is_configured": true, 00:18:09.506 "data_offset": 256, 00:18:09.506 "data_size": 7936 00:18:09.506 }, 00:18:09.506 { 00:18:09.506 "name": "BaseBdev2", 00:18:09.506 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:09.506 "is_configured": true, 00:18:09.506 "data_offset": 256, 00:18:09.506 "data_size": 7936 00:18:09.506 } 00:18:09.506 ] 00:18:09.506 }' 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.506 04:07:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.506 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.506 [2024-11-18 04:07:06.116606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.765 [2024-11-18 04:07:06.160514] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.765 [2024-11-18 04:07:06.160571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.765 [2024-11-18 04:07:06.160585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.765 [2024-11-18 04:07:06.160595] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.765 04:07:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.765 "name": "raid_bdev1", 00:18:09.765 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:09.765 "strip_size_kb": 0, 00:18:09.765 "state": "online", 00:18:09.765 "raid_level": "raid1", 00:18:09.765 "superblock": true, 00:18:09.765 "num_base_bdevs": 2, 00:18:09.765 "num_base_bdevs_discovered": 1, 00:18:09.765 "num_base_bdevs_operational": 1, 00:18:09.765 "base_bdevs_list": [ 00:18:09.765 { 00:18:09.765 "name": null, 00:18:09.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.765 "is_configured": false, 00:18:09.765 "data_offset": 0, 00:18:09.765 "data_size": 7936 00:18:09.765 }, 00:18:09.765 { 00:18:09.765 "name": "BaseBdev2", 00:18:09.765 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:09.765 "is_configured": true, 00:18:09.765 "data_offset": 256, 00:18:09.765 "data_size": 7936 00:18:09.765 } 00:18:09.765 ] 00:18:09.765 }' 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.765 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.025 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.025 "name": "raid_bdev1", 00:18:10.025 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:10.025 "strip_size_kb": 0, 00:18:10.025 "state": "online", 00:18:10.025 "raid_level": "raid1", 00:18:10.025 "superblock": true, 00:18:10.025 "num_base_bdevs": 2, 00:18:10.025 "num_base_bdevs_discovered": 1, 00:18:10.025 "num_base_bdevs_operational": 1, 00:18:10.025 "base_bdevs_list": [ 00:18:10.025 { 00:18:10.025 "name": null, 00:18:10.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.025 
"is_configured": false, 00:18:10.025 "data_offset": 0, 00:18:10.025 "data_size": 7936 00:18:10.025 }, 00:18:10.025 { 00:18:10.025 "name": "BaseBdev2", 00:18:10.025 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:10.025 "is_configured": true, 00:18:10.025 "data_offset": 256, 00:18:10.025 "data_size": 7936 00:18:10.025 } 00:18:10.025 ] 00:18:10.025 }' 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.285 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.286 [2024-11-18 04:07:06.766607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.286 [2024-11-18 04:07:06.780115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:10.286 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.286 04:07:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:10.286 [2024-11-18 04:07:06.781915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.260 04:07:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.260 "name": "raid_bdev1", 00:18:11.260 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:11.260 "strip_size_kb": 0, 00:18:11.260 "state": "online", 00:18:11.260 "raid_level": "raid1", 00:18:11.260 "superblock": true, 00:18:11.260 "num_base_bdevs": 2, 00:18:11.260 "num_base_bdevs_discovered": 2, 00:18:11.260 "num_base_bdevs_operational": 2, 00:18:11.260 "process": { 00:18:11.260 "type": "rebuild", 00:18:11.260 "target": "spare", 00:18:11.260 "progress": { 00:18:11.260 "blocks": 2560, 00:18:11.260 "percent": 32 00:18:11.260 } 00:18:11.260 }, 00:18:11.260 "base_bdevs_list": [ 00:18:11.260 { 00:18:11.260 "name": "spare", 00:18:11.260 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:11.260 "is_configured": true, 00:18:11.260 "data_offset": 256, 00:18:11.260 "data_size": 7936 00:18:11.260 }, 
00:18:11.260 { 00:18:11.260 "name": "BaseBdev2", 00:18:11.260 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:11.260 "is_configured": true, 00:18:11.260 "data_offset": 256, 00:18:11.260 "data_size": 7936 00:18:11.260 } 00:18:11.260 ] 00:18:11.260 }' 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.260 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:11.537 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=701 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.537 04:07:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.537 "name": "raid_bdev1", 00:18:11.537 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:11.537 "strip_size_kb": 0, 00:18:11.537 "state": "online", 00:18:11.537 "raid_level": "raid1", 00:18:11.537 "superblock": true, 00:18:11.537 "num_base_bdevs": 2, 00:18:11.537 "num_base_bdevs_discovered": 2, 00:18:11.537 "num_base_bdevs_operational": 2, 00:18:11.537 "process": { 00:18:11.537 "type": "rebuild", 00:18:11.537 "target": "spare", 00:18:11.537 "progress": { 00:18:11.537 "blocks": 2816, 00:18:11.537 "percent": 35 00:18:11.537 } 00:18:11.537 }, 00:18:11.537 "base_bdevs_list": [ 00:18:11.537 { 00:18:11.537 "name": "spare", 00:18:11.537 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:11.537 "is_configured": true, 00:18:11.537 "data_offset": 256, 00:18:11.537 "data_size": 7936 00:18:11.537 }, 00:18:11.537 { 00:18:11.537 "name": "BaseBdev2", 00:18:11.537 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:11.537 
"is_configured": true, 00:18:11.537 "data_offset": 256, 00:18:11.537 "data_size": 7936 00:18:11.537 } 00:18:11.537 ] 00:18:11.537 }' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.537 04:07:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.537 04:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.537 04:07:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.476 04:07:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.476 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.476 "name": "raid_bdev1", 00:18:12.476 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:12.476 "strip_size_kb": 0, 00:18:12.476 "state": "online", 00:18:12.476 "raid_level": "raid1", 00:18:12.476 "superblock": true, 00:18:12.476 "num_base_bdevs": 2, 00:18:12.476 "num_base_bdevs_discovered": 2, 00:18:12.476 "num_base_bdevs_operational": 2, 00:18:12.476 "process": { 00:18:12.476 "type": "rebuild", 00:18:12.476 "target": "spare", 00:18:12.476 "progress": { 00:18:12.476 "blocks": 5632, 00:18:12.476 "percent": 70 00:18:12.476 } 00:18:12.476 }, 00:18:12.476 "base_bdevs_list": [ 00:18:12.476 { 00:18:12.476 "name": "spare", 00:18:12.476 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:12.476 "is_configured": true, 00:18:12.476 "data_offset": 256, 00:18:12.476 "data_size": 7936 00:18:12.476 }, 00:18:12.476 { 00:18:12.476 "name": "BaseBdev2", 00:18:12.476 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:12.476 "is_configured": true, 00:18:12.477 "data_offset": 256, 00:18:12.477 "data_size": 7936 00:18:12.477 } 00:18:12.477 ] 00:18:12.477 }' 00:18:12.477 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.735 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.735 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.735 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.735 04:07:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.305 [2024-11-18 04:07:09.893145] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:13.305 [2024-11-18 04:07:09.893213] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:13.305 [2024-11-18 04:07:09.893324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.566 "name": "raid_bdev1", 00:18:13.566 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:13.566 "strip_size_kb": 0, 00:18:13.566 "state": "online", 00:18:13.566 "raid_level": "raid1", 00:18:13.566 "superblock": true, 00:18:13.566 
"num_base_bdevs": 2, 00:18:13.566 "num_base_bdevs_discovered": 2, 00:18:13.566 "num_base_bdevs_operational": 2, 00:18:13.566 "base_bdevs_list": [ 00:18:13.566 { 00:18:13.566 "name": "spare", 00:18:13.566 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:13.566 "is_configured": true, 00:18:13.566 "data_offset": 256, 00:18:13.566 "data_size": 7936 00:18:13.566 }, 00:18:13.566 { 00:18:13.566 "name": "BaseBdev2", 00:18:13.566 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:13.566 "is_configured": true, 00:18:13.566 "data_offset": 256, 00:18:13.566 "data_size": 7936 00:18:13.566 } 00:18:13.566 ] 00:18:13.566 }' 00:18:13.566 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.825 04:07:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.825 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.825 "name": "raid_bdev1", 00:18:13.825 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:13.825 "strip_size_kb": 0, 00:18:13.825 "state": "online", 00:18:13.825 "raid_level": "raid1", 00:18:13.825 "superblock": true, 00:18:13.825 "num_base_bdevs": 2, 00:18:13.825 "num_base_bdevs_discovered": 2, 00:18:13.825 "num_base_bdevs_operational": 2, 00:18:13.825 "base_bdevs_list": [ 00:18:13.825 { 00:18:13.825 "name": "spare", 00:18:13.825 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:13.825 "is_configured": true, 00:18:13.825 "data_offset": 256, 00:18:13.825 "data_size": 7936 00:18:13.825 }, 00:18:13.825 { 00:18:13.825 "name": "BaseBdev2", 00:18:13.825 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:13.825 "is_configured": true, 00:18:13.825 "data_offset": 256, 00:18:13.825 "data_size": 7936 00:18:13.826 } 00:18:13.826 ] 00:18:13.826 }' 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.826 "name": "raid_bdev1", 00:18:13.826 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:13.826 
"strip_size_kb": 0, 00:18:13.826 "state": "online", 00:18:13.826 "raid_level": "raid1", 00:18:13.826 "superblock": true, 00:18:13.826 "num_base_bdevs": 2, 00:18:13.826 "num_base_bdevs_discovered": 2, 00:18:13.826 "num_base_bdevs_operational": 2, 00:18:13.826 "base_bdevs_list": [ 00:18:13.826 { 00:18:13.826 "name": "spare", 00:18:13.826 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:13.826 "is_configured": true, 00:18:13.826 "data_offset": 256, 00:18:13.826 "data_size": 7936 00:18:13.826 }, 00:18:13.826 { 00:18:13.826 "name": "BaseBdev2", 00:18:13.826 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:13.826 "is_configured": true, 00:18:13.826 "data_offset": 256, 00:18:13.826 "data_size": 7936 00:18:13.826 } 00:18:13.826 ] 00:18:13.826 }' 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.826 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.396 [2024-11-18 04:07:10.862527] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.396 [2024-11-18 04:07:10.862559] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.396 [2024-11-18 04:07:10.862630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.396 [2024-11-18 04:07:10.862685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.396 [2024-11-18 04:07:10.862695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.396 04:07:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:14.656 /dev/nbd0 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.656 1+0 records in 00:18:14.656 1+0 records out 00:18:14.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041168 s, 9.9 MB/s 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.656 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.657 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:14.917 /dev/nbd1 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:14.917 1+0 records in 00:18:14.917 1+0 records out 00:18:14.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441627 s, 9.3 MB/s 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:14.917 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:15.177 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:15.438 04:07:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.438 [2024-11-18 04:07:12.028912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:15.438 [2024-11-18 04:07:12.028982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.438 [2024-11-18 04:07:12.029004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:15.438 [2024-11-18 04:07:12.029013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:15.438 [2024-11-18 04:07:12.030884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.438 [2024-11-18 04:07:12.030916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:15.438 [2024-11-18 04:07:12.030974] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:15.438 [2024-11-18 04:07:12.031028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.438 [2024-11-18 04:07:12.031174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.438 spare 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.438 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 [2024-11-18 04:07:12.131054] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:15.697 [2024-11-18 04:07:12.131085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:15.697 [2024-11-18 04:07:12.131193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:15.697 [2024-11-18 04:07:12.131311] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:15.697 [2024-11-18 04:07:12.131319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:15.697 [2024-11-18 04:07:12.131431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.697 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.697 "name": "raid_bdev1", 00:18:15.697 "uuid": 
"e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:15.697 "strip_size_kb": 0, 00:18:15.697 "state": "online", 00:18:15.697 "raid_level": "raid1", 00:18:15.697 "superblock": true, 00:18:15.697 "num_base_bdevs": 2, 00:18:15.697 "num_base_bdevs_discovered": 2, 00:18:15.697 "num_base_bdevs_operational": 2, 00:18:15.697 "base_bdevs_list": [ 00:18:15.697 { 00:18:15.697 "name": "spare", 00:18:15.698 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:15.698 "is_configured": true, 00:18:15.698 "data_offset": 256, 00:18:15.698 "data_size": 7936 00:18:15.698 }, 00:18:15.698 { 00:18:15.698 "name": "BaseBdev2", 00:18:15.698 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:15.698 "is_configured": true, 00:18:15.698 "data_offset": 256, 00:18:15.698 "data_size": 7936 00:18:15.698 } 00:18:15.698 ] 00:18:15.698 }' 00:18:15.698 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.698 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.266 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.266 "name": "raid_bdev1", 00:18:16.266 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:16.266 "strip_size_kb": 0, 00:18:16.266 "state": "online", 00:18:16.266 "raid_level": "raid1", 00:18:16.266 "superblock": true, 00:18:16.266 "num_base_bdevs": 2, 00:18:16.266 "num_base_bdevs_discovered": 2, 00:18:16.267 "num_base_bdevs_operational": 2, 00:18:16.267 "base_bdevs_list": [ 00:18:16.267 { 00:18:16.267 "name": "spare", 00:18:16.267 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:16.267 "is_configured": true, 00:18:16.267 "data_offset": 256, 00:18:16.267 "data_size": 7936 00:18:16.267 }, 00:18:16.267 { 00:18:16.267 "name": "BaseBdev2", 00:18:16.267 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:16.267 "is_configured": true, 00:18:16.267 "data_offset": 256, 00:18:16.267 "data_size": 7936 00:18:16.267 } 00:18:16.267 ] 00:18:16.267 }' 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 [2024-11-18 04:07:12.787786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.267 "name": "raid_bdev1", 00:18:16.267 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:16.267 "strip_size_kb": 0, 00:18:16.267 "state": "online", 00:18:16.267 "raid_level": "raid1", 00:18:16.267 "superblock": true, 00:18:16.267 "num_base_bdevs": 2, 00:18:16.267 "num_base_bdevs_discovered": 1, 00:18:16.267 "num_base_bdevs_operational": 1, 00:18:16.267 "base_bdevs_list": [ 00:18:16.267 { 00:18:16.267 "name": null, 00:18:16.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.267 "is_configured": false, 00:18:16.267 "data_offset": 0, 00:18:16.267 "data_size": 7936 00:18:16.267 }, 00:18:16.267 { 00:18:16.267 "name": "BaseBdev2", 00:18:16.267 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:16.267 "is_configured": true, 00:18:16.267 "data_offset": 256, 00:18:16.267 "data_size": 7936 00:18:16.267 } 00:18:16.267 ] 00:18:16.267 }' 00:18:16.267 04:07:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.267 04:07:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.837 04:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.837 04:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.837 04:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.837 [2024-11-18 04:07:13.223033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.837 [2024-11-18 04:07:13.223184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:16.837 [2024-11-18 04:07:13.223204] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:16.837 [2024-11-18 04:07:13.223255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.837 [2024-11-18 04:07:13.236575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:16.837 04:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.837 04:07:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:16.837 [2024-11-18 04:07:13.238284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.776 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:17.776 "name": "raid_bdev1", 00:18:17.776 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:17.776 "strip_size_kb": 0, 00:18:17.776 "state": "online", 00:18:17.776 "raid_level": "raid1", 00:18:17.776 "superblock": true, 00:18:17.776 "num_base_bdevs": 2, 00:18:17.776 "num_base_bdevs_discovered": 2, 00:18:17.776 "num_base_bdevs_operational": 2, 00:18:17.776 "process": { 00:18:17.776 "type": "rebuild", 00:18:17.776 "target": "spare", 00:18:17.776 "progress": { 00:18:17.776 "blocks": 2560, 00:18:17.776 "percent": 32 00:18:17.776 } 00:18:17.776 }, 00:18:17.776 "base_bdevs_list": [ 00:18:17.776 { 00:18:17.776 "name": "spare", 00:18:17.776 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:17.776 "is_configured": true, 00:18:17.776 "data_offset": 256, 00:18:17.776 "data_size": 7936 00:18:17.776 }, 00:18:17.776 { 00:18:17.776 "name": "BaseBdev2", 00:18:17.776 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:17.777 "is_configured": true, 00:18:17.777 "data_offset": 256, 00:18:17.777 "data_size": 7936 00:18:17.777 } 00:18:17.777 ] 00:18:17.777 }' 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.777 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.777 [2024-11-18 04:07:14.398366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.037 [2024-11-18 04:07:14.442734] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.037 [2024-11-18 04:07:14.442809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.037 [2024-11-18 04:07:14.442822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.037 [2024-11-18 04:07:14.442852] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.037 "name": "raid_bdev1", 00:18:18.037 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:18.037 "strip_size_kb": 0, 00:18:18.037 "state": "online", 00:18:18.037 "raid_level": "raid1", 00:18:18.037 "superblock": true, 00:18:18.037 "num_base_bdevs": 2, 00:18:18.037 "num_base_bdevs_discovered": 1, 00:18:18.037 "num_base_bdevs_operational": 1, 00:18:18.037 "base_bdevs_list": [ 00:18:18.037 { 00:18:18.037 "name": null, 00:18:18.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.037 
"is_configured": false, 00:18:18.037 "data_offset": 0, 00:18:18.037 "data_size": 7936 00:18:18.037 }, 00:18:18.037 { 00:18:18.037 "name": "BaseBdev2", 00:18:18.037 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:18.037 "is_configured": true, 00:18:18.037 "data_offset": 256, 00:18:18.037 "data_size": 7936 00:18:18.037 } 00:18:18.037 ] 00:18:18.037 }' 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.037 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.297 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:18.297 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.297 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.297 [2024-11-18 04:07:14.928868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:18.297 [2024-11-18 04:07:14.928921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.297 [2024-11-18 04:07:14.928942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:18.297 [2024-11-18 04:07:14.928954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.297 [2024-11-18 04:07:14.929183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.297 [2024-11-18 04:07:14.929208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:18.297 [2024-11-18 04:07:14.929257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:18.297 [2024-11-18 04:07:14.929269] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:18.297 [2024-11-18 04:07:14.929277] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:18.297 [2024-11-18 04:07:14.929303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.557 [2024-11-18 04:07:14.942097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:18.557 spare 00:18:18.557 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.557 04:07:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:18.557 [2024-11-18 04:07:14.943840] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.499 04:07:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:19.499 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.499 "name": "raid_bdev1", 00:18:19.499 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:19.499 "strip_size_kb": 0, 00:18:19.499 "state": "online", 00:18:19.499 "raid_level": "raid1", 00:18:19.499 "superblock": true, 00:18:19.499 "num_base_bdevs": 2, 00:18:19.499 "num_base_bdevs_discovered": 2, 00:18:19.499 "num_base_bdevs_operational": 2, 00:18:19.499 "process": { 00:18:19.499 "type": "rebuild", 00:18:19.499 "target": "spare", 00:18:19.499 "progress": { 00:18:19.499 "blocks": 2560, 00:18:19.499 "percent": 32 00:18:19.499 } 00:18:19.499 }, 00:18:19.499 "base_bdevs_list": [ 00:18:19.500 { 00:18:19.500 "name": "spare", 00:18:19.500 "uuid": "2abed6f4-02d6-582d-9503-7119aff49c6d", 00:18:19.500 "is_configured": true, 00:18:19.500 "data_offset": 256, 00:18:19.500 "data_size": 7936 00:18:19.500 }, 00:18:19.500 { 00:18:19.500 "name": "BaseBdev2", 00:18:19.500 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:19.500 "is_configured": true, 00:18:19.500 "data_offset": 256, 00:18:19.500 "data_size": 7936 00:18:19.500 } 00:18:19.500 ] 00:18:19.500 }' 00:18:19.500 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.500 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.500 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.500 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.500 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:19.500 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.500 04:07:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.500 [2024-11-18 04:07:16.103773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.761 [2024-11-18 04:07:16.148179] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.761 [2024-11-18 04:07:16.148232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.761 [2024-11-18 04:07:16.148264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.761 [2024-11-18 04:07:16.148271] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.761 04:07:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.761 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.762 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.762 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.762 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.762 "name": "raid_bdev1", 00:18:19.762 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:19.762 "strip_size_kb": 0, 00:18:19.762 "state": "online", 00:18:19.762 "raid_level": "raid1", 00:18:19.762 "superblock": true, 00:18:19.762 "num_base_bdevs": 2, 00:18:19.762 "num_base_bdevs_discovered": 1, 00:18:19.762 "num_base_bdevs_operational": 1, 00:18:19.762 "base_bdevs_list": [ 00:18:19.762 { 00:18:19.762 "name": null, 00:18:19.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.762 "is_configured": false, 00:18:19.762 "data_offset": 0, 00:18:19.762 "data_size": 7936 00:18:19.762 }, 00:18:19.762 { 00:18:19.762 "name": "BaseBdev2", 00:18:19.762 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:19.762 "is_configured": true, 00:18:19.762 "data_offset": 256, 00:18:19.762 "data_size": 7936 00:18:19.762 } 00:18:19.762 ] 00:18:19.762 }' 00:18:19.762 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.762 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.022 "name": "raid_bdev1", 00:18:20.022 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:20.022 "strip_size_kb": 0, 00:18:20.022 "state": "online", 00:18:20.022 "raid_level": "raid1", 00:18:20.022 "superblock": true, 00:18:20.022 "num_base_bdevs": 2, 00:18:20.022 "num_base_bdevs_discovered": 1, 00:18:20.022 "num_base_bdevs_operational": 1, 00:18:20.022 "base_bdevs_list": [ 00:18:20.022 { 00:18:20.022 "name": null, 00:18:20.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.022 "is_configured": false, 00:18:20.022 "data_offset": 0, 00:18:20.022 "data_size": 7936 00:18:20.022 }, 00:18:20.022 { 00:18:20.022 "name": "BaseBdev2", 00:18:20.022 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:20.022 "is_configured": true, 
00:18:20.022 "data_offset": 256, 00:18:20.022 "data_size": 7936 00:18:20.022 } 00:18:20.022 ] 00:18:20.022 }' 00:18:20.022 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.283 [2024-11-18 04:07:16.745793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:20.283 [2024-11-18 04:07:16.745863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.283 [2024-11-18 04:07:16.745885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:20.283 [2024-11-18 04:07:16.745895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.283 [2024-11-18 04:07:16.746094] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.283 [2024-11-18 04:07:16.746125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.283 [2024-11-18 04:07:16.746169] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:20.283 [2024-11-18 04:07:16.746182] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.283 [2024-11-18 04:07:16.746190] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:20.283 [2024-11-18 04:07:16.746199] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:20.283 BaseBdev1 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.283 04:07:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:21.223 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.223 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.223 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.224 "name": "raid_bdev1", 00:18:21.224 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:21.224 "strip_size_kb": 0, 00:18:21.224 "state": "online", 00:18:21.224 "raid_level": "raid1", 00:18:21.224 "superblock": true, 00:18:21.224 "num_base_bdevs": 2, 00:18:21.224 "num_base_bdevs_discovered": 1, 00:18:21.224 "num_base_bdevs_operational": 1, 00:18:21.224 "base_bdevs_list": [ 00:18:21.224 { 00:18:21.224 "name": null, 00:18:21.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.224 "is_configured": false, 00:18:21.224 "data_offset": 0, 00:18:21.224 "data_size": 7936 00:18:21.224 }, 00:18:21.224 { 00:18:21.224 "name": "BaseBdev2", 00:18:21.224 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:21.224 "is_configured": true, 00:18:21.224 "data_offset": 256, 00:18:21.224 "data_size": 7936 00:18:21.224 } 00:18:21.224 ] 00:18:21.224 }' 00:18:21.224 04:07:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.224 04:07:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.794 "name": "raid_bdev1", 00:18:21.794 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:21.794 "strip_size_kb": 0, 00:18:21.794 "state": "online", 00:18:21.794 "raid_level": "raid1", 00:18:21.794 "superblock": true, 00:18:21.794 "num_base_bdevs": 2, 00:18:21.794 "num_base_bdevs_discovered": 1, 00:18:21.794 "num_base_bdevs_operational": 1, 00:18:21.794 "base_bdevs_list": [ 00:18:21.794 { 00:18:21.794 "name": null, 00:18:21.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.794 "is_configured": false, 00:18:21.794 "data_offset": 0, 00:18:21.794 
"data_size": 7936 00:18:21.794 }, 00:18:21.794 { 00:18:21.794 "name": "BaseBdev2", 00:18:21.794 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:21.794 "is_configured": true, 00:18:21.794 "data_offset": 256, 00:18:21.794 "data_size": 7936 00:18:21.794 } 00:18:21.794 ] 00:18:21.794 }' 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.794 [2024-11-18 04:07:18.394999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.794 [2024-11-18 04:07:18.395112] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:21.794 [2024-11-18 04:07:18.395127] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:21.794 request: 00:18:21.794 { 00:18:21.794 "base_bdev": "BaseBdev1", 00:18:21.794 "raid_bdev": "raid_bdev1", 00:18:21.794 "method": "bdev_raid_add_base_bdev", 00:18:21.794 "req_id": 1 00:18:21.794 } 00:18:21.794 Got JSON-RPC error response 00:18:21.794 response: 00:18:21.794 { 00:18:21.794 "code": -22, 00:18:21.794 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:21.794 } 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.794 04:07:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.189 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.189 "name": "raid_bdev1", 00:18:23.189 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:23.189 "strip_size_kb": 0, 00:18:23.189 "state": "online", 00:18:23.189 "raid_level": "raid1", 00:18:23.189 "superblock": true, 00:18:23.189 "num_base_bdevs": 2, 00:18:23.189 "num_base_bdevs_discovered": 1, 00:18:23.189 "num_base_bdevs_operational": 1, 00:18:23.189 "base_bdevs_list": [ 
00:18:23.189 { 00:18:23.189 "name": null, 00:18:23.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.189 "is_configured": false, 00:18:23.190 "data_offset": 0, 00:18:23.190 "data_size": 7936 00:18:23.190 }, 00:18:23.190 { 00:18:23.190 "name": "BaseBdev2", 00:18:23.190 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:23.190 "is_configured": true, 00:18:23.190 "data_offset": 256, 00:18:23.190 "data_size": 7936 00:18:23.190 } 00:18:23.190 ] 00:18:23.190 }' 00:18:23.190 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.190 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.450 "name": "raid_bdev1", 00:18:23.450 "uuid": "e798b9dd-0cb0-4662-86e4-9d5fa73b6bc0", 00:18:23.450 "strip_size_kb": 0, 00:18:23.450 "state": "online", 00:18:23.450 "raid_level": "raid1", 00:18:23.450 "superblock": true, 00:18:23.450 "num_base_bdevs": 2, 00:18:23.450 "num_base_bdevs_discovered": 1, 00:18:23.450 "num_base_bdevs_operational": 1, 00:18:23.450 "base_bdevs_list": [ 00:18:23.450 { 00:18:23.450 "name": null, 00:18:23.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.450 "is_configured": false, 00:18:23.450 "data_offset": 0, 00:18:23.450 "data_size": 7936 00:18:23.450 }, 00:18:23.450 { 00:18:23.450 "name": "BaseBdev2", 00:18:23.450 "uuid": "f8087e97-aa95-55fd-b289-3aedd2a1d937", 00:18:23.450 "is_configured": true, 00:18:23.450 "data_offset": 256, 00:18:23.450 "data_size": 7936 00:18:23.450 } 00:18:23.450 ] 00:18:23.450 }' 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.450 04:07:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87659 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87659 ']' 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87659 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.450 
04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87659 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.450 killing process with pid 87659 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87659' 00:18:23.450 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87659 00:18:23.450 Received shutdown signal, test time was about 60.000000 seconds 00:18:23.450 00:18:23.450 Latency(us) 00:18:23.450 [2024-11-18T04:07:20.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.450 [2024-11-18T04:07:20.091Z] =================================================================================================================== 00:18:23.450 [2024-11-18T04:07:20.092Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:23.451 [2024-11-18 04:07:20.063011] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.451 [2024-11-18 04:07:20.063105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.451 [2024-11-18 04:07:20.063142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.451 [2024-11-18 04:07:20.063153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:23.451 04:07:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87659 00:18:24.021 [2024-11-18 04:07:20.362778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.961 04:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:24.961 00:18:24.961 real 0m19.629s 00:18:24.961 user 0m25.716s 00:18:24.961 sys 0m2.629s 00:18:24.961 04:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.961 04:07:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.961 ************************************ 00:18:24.961 END TEST raid_rebuild_test_sb_md_separate 00:18:24.961 ************************************ 00:18:24.961 04:07:21 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:24.961 04:07:21 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:24.961 04:07:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:24.961 04:07:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.961 04:07:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.961 ************************************ 00:18:24.961 START TEST raid_state_function_test_sb_md_interleaved 00:18:24.961 ************************************ 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:24.961 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:24.962 04:07:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88350 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:24.962 Process raid pid: 88350 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88350' 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88350 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88350 ']' 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.962 04:07:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.962 [2024-11-18 04:07:21.561006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:24.962 [2024-11-18 04:07:21.561132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.221 [2024-11-18 04:07:21.744629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.221 [2024-11-18 04:07:21.853133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.481 [2024-11-18 04:07:22.048605] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.481 [2024-11-18 04:07:22.048647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.742 [2024-11-18 04:07:22.370724] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.742 [2024-11-18 04:07:22.370788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.742 [2024-11-18 04:07:22.370797] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.742 [2024-11-18 04:07:22.370807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.742 04:07:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.742 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.002 04:07:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.002 "name": "Existed_Raid", 00:18:26.002 "uuid": "6a451545-bb50-4631-a8bb-07e9ef889c8b", 00:18:26.002 "strip_size_kb": 0, 00:18:26.002 "state": "configuring", 00:18:26.002 "raid_level": "raid1", 00:18:26.002 "superblock": true, 00:18:26.002 "num_base_bdevs": 2, 00:18:26.002 "num_base_bdevs_discovered": 0, 00:18:26.002 "num_base_bdevs_operational": 2, 00:18:26.002 "base_bdevs_list": [ 00:18:26.002 { 00:18:26.002 "name": "BaseBdev1", 00:18:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.002 "is_configured": false, 00:18:26.002 "data_offset": 0, 00:18:26.002 "data_size": 0 00:18:26.002 }, 00:18:26.002 { 00:18:26.002 "name": "BaseBdev2", 00:18:26.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.002 "is_configured": false, 00:18:26.002 "data_offset": 0, 00:18:26.002 "data_size": 0 00:18:26.002 } 00:18:26.002 ] 00:18:26.002 }' 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.002 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.262 [2024-11-18 04:07:22.861816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.262 [2024-11-18 04:07:22.861859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.262 [2024-11-18 04:07:22.873805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.262 [2024-11-18 04:07:22.873856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.262 [2024-11-18 04:07:22.873864] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.262 [2024-11-18 04:07:22.873875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.262 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.522 [2024-11-18 04:07:22.919256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.522 BaseBdev1 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.522 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.522 [ 00:18:26.522 { 00:18:26.522 "name": "BaseBdev1", 00:18:26.522 "aliases": [ 00:18:26.522 "d8c91e90-36e7-4fd9-af31-ea093c344135" 00:18:26.522 ], 00:18:26.522 "product_name": "Malloc disk", 00:18:26.522 "block_size": 4128, 00:18:26.522 "num_blocks": 8192, 00:18:26.522 "uuid": "d8c91e90-36e7-4fd9-af31-ea093c344135", 00:18:26.522 "md_size": 32, 00:18:26.522 
"md_interleave": true, 00:18:26.522 "dif_type": 0, 00:18:26.522 "assigned_rate_limits": { 00:18:26.522 "rw_ios_per_sec": 0, 00:18:26.522 "rw_mbytes_per_sec": 0, 00:18:26.522 "r_mbytes_per_sec": 0, 00:18:26.522 "w_mbytes_per_sec": 0 00:18:26.522 }, 00:18:26.522 "claimed": true, 00:18:26.522 "claim_type": "exclusive_write", 00:18:26.522 "zoned": false, 00:18:26.522 "supported_io_types": { 00:18:26.522 "read": true, 00:18:26.522 "write": true, 00:18:26.522 "unmap": true, 00:18:26.522 "flush": true, 00:18:26.522 "reset": true, 00:18:26.522 "nvme_admin": false, 00:18:26.522 "nvme_io": false, 00:18:26.522 "nvme_io_md": false, 00:18:26.522 "write_zeroes": true, 00:18:26.522 "zcopy": true, 00:18:26.523 "get_zone_info": false, 00:18:26.523 "zone_management": false, 00:18:26.523 "zone_append": false, 00:18:26.523 "compare": false, 00:18:26.523 "compare_and_write": false, 00:18:26.523 "abort": true, 00:18:26.523 "seek_hole": false, 00:18:26.523 "seek_data": false, 00:18:26.523 "copy": true, 00:18:26.523 "nvme_iov_md": false 00:18:26.523 }, 00:18:26.523 "memory_domains": [ 00:18:26.523 { 00:18:26.523 "dma_device_id": "system", 00:18:26.523 "dma_device_type": 1 00:18:26.523 }, 00:18:26.523 { 00:18:26.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.523 "dma_device_type": 2 00:18:26.523 } 00:18:26.523 ], 00:18:26.523 "driver_specific": {} 00:18:26.523 } 00:18:26.523 ] 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.523 04:07:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.523 04:07:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.523 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.523 "name": "Existed_Raid", 00:18:26.523 "uuid": "c679f366-37bf-4a2e-b94e-c821f572d8ee", 00:18:26.523 "strip_size_kb": 0, 00:18:26.523 "state": "configuring", 00:18:26.523 "raid_level": "raid1", 
00:18:26.523 "superblock": true, 00:18:26.523 "num_base_bdevs": 2, 00:18:26.523 "num_base_bdevs_discovered": 1, 00:18:26.523 "num_base_bdevs_operational": 2, 00:18:26.523 "base_bdevs_list": [ 00:18:26.523 { 00:18:26.523 "name": "BaseBdev1", 00:18:26.523 "uuid": "d8c91e90-36e7-4fd9-af31-ea093c344135", 00:18:26.523 "is_configured": true, 00:18:26.523 "data_offset": 256, 00:18:26.523 "data_size": 7936 00:18:26.523 }, 00:18:26.523 { 00:18:26.523 "name": "BaseBdev2", 00:18:26.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.523 "is_configured": false, 00:18:26.523 "data_offset": 0, 00:18:26.523 "data_size": 0 00:18:26.523 } 00:18:26.523 ] 00:18:26.523 }' 00:18:26.523 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.523 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.783 [2024-11-18 04:07:23.414485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.783 [2024-11-18 04:07:23.414524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:26.783 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.042 [2024-11-18 04:07:23.426520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.042 [2024-11-18 04:07:23.428302] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.042 [2024-11-18 04:07:23.428343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.042 
04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.042 "name": "Existed_Raid", 00:18:27.042 "uuid": "fffdfec9-ac3d-42f2-8564-682498114985", 00:18:27.042 "strip_size_kb": 0, 00:18:27.042 "state": "configuring", 00:18:27.042 "raid_level": "raid1", 00:18:27.042 "superblock": true, 00:18:27.042 "num_base_bdevs": 2, 00:18:27.042 "num_base_bdevs_discovered": 1, 00:18:27.042 "num_base_bdevs_operational": 2, 00:18:27.042 "base_bdevs_list": [ 00:18:27.042 { 00:18:27.042 "name": "BaseBdev1", 00:18:27.042 "uuid": "d8c91e90-36e7-4fd9-af31-ea093c344135", 00:18:27.042 "is_configured": true, 00:18:27.042 "data_offset": 256, 00:18:27.042 "data_size": 7936 00:18:27.042 }, 00:18:27.042 { 00:18:27.042 "name": "BaseBdev2", 00:18:27.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.042 "is_configured": false, 00:18:27.042 "data_offset": 0, 00:18:27.042 "data_size": 0 00:18:27.042 } 00:18:27.042 ] 00:18:27.042 }' 00:18:27.042 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:27.043 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.303 [2024-11-18 04:07:23.916368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.303 [2024-11-18 04:07:23.916590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:27.303 [2024-11-18 04:07:23.916602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.303 [2024-11-18 04:07:23.916701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:27.303 [2024-11-18 04:07:23.916777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:27.303 [2024-11-18 04:07:23.916808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:27.303 [2024-11-18 04:07:23.916881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.303 BaseBdev2 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.303 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.303 [ 00:18:27.303 { 00:18:27.303 "name": "BaseBdev2", 00:18:27.303 "aliases": [ 00:18:27.303 "7f3688b8-397b-4e59-96c2-1d1464bff8bc" 00:18:27.303 ], 00:18:27.303 "product_name": "Malloc disk", 00:18:27.303 "block_size": 4128, 00:18:27.303 "num_blocks": 8192, 00:18:27.563 "uuid": "7f3688b8-397b-4e59-96c2-1d1464bff8bc", 00:18:27.563 "md_size": 32, 00:18:27.563 "md_interleave": true, 00:18:27.563 "dif_type": 0, 00:18:27.563 "assigned_rate_limits": { 00:18:27.563 "rw_ios_per_sec": 0, 00:18:27.563 "rw_mbytes_per_sec": 0, 00:18:27.563 "r_mbytes_per_sec": 0, 00:18:27.563 "w_mbytes_per_sec": 0 00:18:27.563 }, 00:18:27.563 "claimed": true, 00:18:27.563 "claim_type": "exclusive_write", 
00:18:27.563 "zoned": false, 00:18:27.563 "supported_io_types": { 00:18:27.563 "read": true, 00:18:27.563 "write": true, 00:18:27.563 "unmap": true, 00:18:27.563 "flush": true, 00:18:27.563 "reset": true, 00:18:27.563 "nvme_admin": false, 00:18:27.563 "nvme_io": false, 00:18:27.563 "nvme_io_md": false, 00:18:27.563 "write_zeroes": true, 00:18:27.563 "zcopy": true, 00:18:27.563 "get_zone_info": false, 00:18:27.563 "zone_management": false, 00:18:27.563 "zone_append": false, 00:18:27.563 "compare": false, 00:18:27.563 "compare_and_write": false, 00:18:27.563 "abort": true, 00:18:27.563 "seek_hole": false, 00:18:27.563 "seek_data": false, 00:18:27.563 "copy": true, 00:18:27.563 "nvme_iov_md": false 00:18:27.563 }, 00:18:27.563 "memory_domains": [ 00:18:27.563 { 00:18:27.563 "dma_device_id": "system", 00:18:27.563 "dma_device_type": 1 00:18:27.563 }, 00:18:27.563 { 00:18:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.563 "dma_device_type": 2 00:18:27.563 } 00:18:27.563 ], 00:18:27.563 "driver_specific": {} 00:18:27.563 } 00:18:27.563 ] 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.563 
04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.563 04:07:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.563 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.563 "name": "Existed_Raid", 00:18:27.563 "uuid": "fffdfec9-ac3d-42f2-8564-682498114985", 00:18:27.563 "strip_size_kb": 0, 00:18:27.563 "state": "online", 00:18:27.563 "raid_level": "raid1", 00:18:27.563 "superblock": true, 00:18:27.563 "num_base_bdevs": 2, 00:18:27.563 "num_base_bdevs_discovered": 2, 00:18:27.564 
"num_base_bdevs_operational": 2, 00:18:27.564 "base_bdevs_list": [ 00:18:27.564 { 00:18:27.564 "name": "BaseBdev1", 00:18:27.564 "uuid": "d8c91e90-36e7-4fd9-af31-ea093c344135", 00:18:27.564 "is_configured": true, 00:18:27.564 "data_offset": 256, 00:18:27.564 "data_size": 7936 00:18:27.564 }, 00:18:27.564 { 00:18:27.564 "name": "BaseBdev2", 00:18:27.564 "uuid": "7f3688b8-397b-4e59-96c2-1d1464bff8bc", 00:18:27.564 "is_configured": true, 00:18:27.564 "data_offset": 256, 00:18:27.564 "data_size": 7936 00:18:27.564 } 00:18:27.564 ] 00:18:27.564 }' 00:18:27.564 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.564 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.824 04:07:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.824 [2024-11-18 04:07:24.415835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.824 "name": "Existed_Raid", 00:18:27.824 "aliases": [ 00:18:27.824 "fffdfec9-ac3d-42f2-8564-682498114985" 00:18:27.824 ], 00:18:27.824 "product_name": "Raid Volume", 00:18:27.824 "block_size": 4128, 00:18:27.824 "num_blocks": 7936, 00:18:27.824 "uuid": "fffdfec9-ac3d-42f2-8564-682498114985", 00:18:27.824 "md_size": 32, 00:18:27.824 "md_interleave": true, 00:18:27.824 "dif_type": 0, 00:18:27.824 "assigned_rate_limits": { 00:18:27.824 "rw_ios_per_sec": 0, 00:18:27.824 "rw_mbytes_per_sec": 0, 00:18:27.824 "r_mbytes_per_sec": 0, 00:18:27.824 "w_mbytes_per_sec": 0 00:18:27.824 }, 00:18:27.824 "claimed": false, 00:18:27.824 "zoned": false, 00:18:27.824 "supported_io_types": { 00:18:27.824 "read": true, 00:18:27.824 "write": true, 00:18:27.824 "unmap": false, 00:18:27.824 "flush": false, 00:18:27.824 "reset": true, 00:18:27.824 "nvme_admin": false, 00:18:27.824 "nvme_io": false, 00:18:27.824 "nvme_io_md": false, 00:18:27.824 "write_zeroes": true, 00:18:27.824 "zcopy": false, 00:18:27.824 "get_zone_info": false, 00:18:27.824 "zone_management": false, 00:18:27.824 "zone_append": false, 00:18:27.824 "compare": false, 00:18:27.824 "compare_and_write": false, 00:18:27.824 "abort": false, 00:18:27.824 "seek_hole": false, 00:18:27.824 "seek_data": false, 00:18:27.824 "copy": false, 00:18:27.824 "nvme_iov_md": false 00:18:27.824 }, 00:18:27.824 "memory_domains": [ 00:18:27.824 { 00:18:27.824 "dma_device_id": "system", 00:18:27.824 "dma_device_type": 1 00:18:27.824 }, 00:18:27.824 { 00:18:27.824 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:27.824 "dma_device_type": 2 00:18:27.824 }, 00:18:27.824 { 00:18:27.824 "dma_device_id": "system", 00:18:27.824 "dma_device_type": 1 00:18:27.824 }, 00:18:27.824 { 00:18:27.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.824 "dma_device_type": 2 00:18:27.824 } 00:18:27.824 ], 00:18:27.824 "driver_specific": { 00:18:27.824 "raid": { 00:18:27.824 "uuid": "fffdfec9-ac3d-42f2-8564-682498114985", 00:18:27.824 "strip_size_kb": 0, 00:18:27.824 "state": "online", 00:18:27.824 "raid_level": "raid1", 00:18:27.824 "superblock": true, 00:18:27.824 "num_base_bdevs": 2, 00:18:27.824 "num_base_bdevs_discovered": 2, 00:18:27.824 "num_base_bdevs_operational": 2, 00:18:27.824 "base_bdevs_list": [ 00:18:27.824 { 00:18:27.824 "name": "BaseBdev1", 00:18:27.824 "uuid": "d8c91e90-36e7-4fd9-af31-ea093c344135", 00:18:27.824 "is_configured": true, 00:18:27.824 "data_offset": 256, 00:18:27.824 "data_size": 7936 00:18:27.824 }, 00:18:27.824 { 00:18:27.824 "name": "BaseBdev2", 00:18:27.824 "uuid": "7f3688b8-397b-4e59-96c2-1d1464bff8bc", 00:18:27.824 "is_configured": true, 00:18:27.824 "data_offset": 256, 00:18:27.824 "data_size": 7936 00:18:27.824 } 00:18:27.824 ] 00:18:27.824 } 00:18:27.824 } 00:18:27.824 }' 00:18:27.824 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:28.085 BaseBdev2' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:28.085 
04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.085 [2024-11-18 04:07:24.603307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.085 04:07:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.085 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.345 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.345 "name": "Existed_Raid", 00:18:28.345 "uuid": "fffdfec9-ac3d-42f2-8564-682498114985", 00:18:28.345 "strip_size_kb": 0, 00:18:28.345 "state": "online", 00:18:28.345 "raid_level": "raid1", 00:18:28.345 "superblock": true, 00:18:28.345 "num_base_bdevs": 2, 00:18:28.345 "num_base_bdevs_discovered": 1, 00:18:28.345 "num_base_bdevs_operational": 1, 00:18:28.345 "base_bdevs_list": [ 00:18:28.345 { 00:18:28.345 "name": null, 00:18:28.345 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:28.345 "is_configured": false, 00:18:28.345 "data_offset": 0, 00:18:28.345 "data_size": 7936 00:18:28.345 }, 00:18:28.345 { 00:18:28.345 "name": "BaseBdev2", 00:18:28.345 "uuid": "7f3688b8-397b-4e59-96c2-1d1464bff8bc", 00:18:28.345 "is_configured": true, 00:18:28.345 "data_offset": 256, 00:18:28.345 "data_size": 7936 00:18:28.345 } 00:18:28.345 ] 00:18:28.345 }' 00:18:28.345 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.345 04:07:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:28.606 04:07:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.606 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.606 [2024-11-18 04:07:25.238086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:28.606 [2024-11-18 04:07:25.238204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.867 [2024-11-18 04:07:25.327499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.867 [2024-11-18 04:07:25.327541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.867 [2024-11-18 04:07:25.327568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88350 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88350 ']' 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88350 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88350 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.867 killing process with pid 88350 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88350' 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88350 00:18:28.867 [2024-11-18 04:07:25.419325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.867 04:07:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88350 00:18:28.867 [2024-11-18 04:07:25.434778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.285 
04:07:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:30.285 00:18:30.285 real 0m5.011s 00:18:30.285 user 0m7.273s 00:18:30.285 sys 0m0.907s 00:18:30.285 04:07:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.285 04:07:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.285 ************************************ 00:18:30.285 END TEST raid_state_function_test_sb_md_interleaved 00:18:30.285 ************************************ 00:18:30.285 04:07:26 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:30.285 04:07:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:30.285 04:07:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.285 04:07:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:30.285 ************************************ 00:18:30.285 START TEST raid_superblock_test_md_interleaved 00:18:30.285 ************************************ 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88602 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88602 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88602 ']' 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.285 04:07:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.285 [2024-11-18 04:07:26.635821] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:30.285 [2024-11-18 04:07:26.635955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88602 ] 00:18:30.285 [2024-11-18 04:07:26.808186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.285 [2024-11-18 04:07:26.916388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.545 [2024-11-18 04:07:27.107819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.545 [2024-11-18 04:07:27.107881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 malloc1 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 [2024-11-18 04:07:27.506540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.115 [2024-11-18 04:07:27.506615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.115 [2024-11-18 04:07:27.506636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.115 [2024-11-18 04:07:27.506646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.115 
[2024-11-18 04:07:27.508441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.115 [2024-11-18 04:07:27.508477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.115 pt1 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 malloc2 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 [2024-11-18 04:07:27.561731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:31.115 [2024-11-18 04:07:27.561782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.115 [2024-11-18 04:07:27.561802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:31.115 [2024-11-18 04:07:27.561810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.115 [2024-11-18 04:07:27.563530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.115 [2024-11-18 04:07:27.563565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:31.115 pt2 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.115 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.115 [2024-11-18 04:07:27.573747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.115 [2024-11-18 04:07:27.575482] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.116 [2024-11-18 04:07:27.575648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.116 [2024-11-18 04:07:27.575660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:31.116 [2024-11-18 04:07:27.575724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:31.116 [2024-11-18 04:07:27.575803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.116 [2024-11-18 04:07:27.575819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:31.116 [2024-11-18 04:07:27.575900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.116 
04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.116 "name": "raid_bdev1", 00:18:31.116 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:31.116 "strip_size_kb": 0, 00:18:31.116 "state": "online", 00:18:31.116 "raid_level": "raid1", 00:18:31.116 "superblock": true, 00:18:31.116 "num_base_bdevs": 2, 00:18:31.116 "num_base_bdevs_discovered": 2, 00:18:31.116 "num_base_bdevs_operational": 2, 00:18:31.116 "base_bdevs_list": [ 00:18:31.116 { 00:18:31.116 "name": "pt1", 00:18:31.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.116 "is_configured": true, 00:18:31.116 "data_offset": 256, 00:18:31.116 "data_size": 7936 00:18:31.116 }, 00:18:31.116 { 00:18:31.116 "name": "pt2", 00:18:31.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.116 "is_configured": true, 00:18:31.116 "data_offset": 256, 00:18:31.116 "data_size": 7936 00:18:31.116 } 00:18:31.116 ] 00:18:31.116 }' 00:18:31.116 04:07:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.116 04:07:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.686 [2024-11-18 04:07:28.057108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.686 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:31.686 "name": "raid_bdev1", 00:18:31.686 "aliases": [ 00:18:31.686 "6d690cdc-5b17-41bd-8f9d-6f452022aa57" 00:18:31.686 ], 00:18:31.686 "product_name": "Raid Volume", 00:18:31.686 "block_size": 4128, 00:18:31.686 "num_blocks": 7936, 00:18:31.686 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:31.686 "md_size": 32, 
00:18:31.686 "md_interleave": true, 00:18:31.686 "dif_type": 0, 00:18:31.686 "assigned_rate_limits": { 00:18:31.686 "rw_ios_per_sec": 0, 00:18:31.686 "rw_mbytes_per_sec": 0, 00:18:31.686 "r_mbytes_per_sec": 0, 00:18:31.686 "w_mbytes_per_sec": 0 00:18:31.687 }, 00:18:31.687 "claimed": false, 00:18:31.687 "zoned": false, 00:18:31.687 "supported_io_types": { 00:18:31.687 "read": true, 00:18:31.687 "write": true, 00:18:31.687 "unmap": false, 00:18:31.687 "flush": false, 00:18:31.687 "reset": true, 00:18:31.687 "nvme_admin": false, 00:18:31.687 "nvme_io": false, 00:18:31.687 "nvme_io_md": false, 00:18:31.687 "write_zeroes": true, 00:18:31.687 "zcopy": false, 00:18:31.687 "get_zone_info": false, 00:18:31.687 "zone_management": false, 00:18:31.687 "zone_append": false, 00:18:31.687 "compare": false, 00:18:31.687 "compare_and_write": false, 00:18:31.687 "abort": false, 00:18:31.687 "seek_hole": false, 00:18:31.687 "seek_data": false, 00:18:31.687 "copy": false, 00:18:31.687 "nvme_iov_md": false 00:18:31.687 }, 00:18:31.687 "memory_domains": [ 00:18:31.687 { 00:18:31.687 "dma_device_id": "system", 00:18:31.687 "dma_device_type": 1 00:18:31.687 }, 00:18:31.687 { 00:18:31.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.687 "dma_device_type": 2 00:18:31.687 }, 00:18:31.687 { 00:18:31.687 "dma_device_id": "system", 00:18:31.687 "dma_device_type": 1 00:18:31.687 }, 00:18:31.687 { 00:18:31.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.687 "dma_device_type": 2 00:18:31.687 } 00:18:31.687 ], 00:18:31.687 "driver_specific": { 00:18:31.687 "raid": { 00:18:31.687 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:31.687 "strip_size_kb": 0, 00:18:31.687 "state": "online", 00:18:31.687 "raid_level": "raid1", 00:18:31.687 "superblock": true, 00:18:31.687 "num_base_bdevs": 2, 00:18:31.687 "num_base_bdevs_discovered": 2, 00:18:31.687 "num_base_bdevs_operational": 2, 00:18:31.687 "base_bdevs_list": [ 00:18:31.687 { 00:18:31.687 "name": "pt1", 00:18:31.687 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:31.687 "is_configured": true, 00:18:31.687 "data_offset": 256, 00:18:31.687 "data_size": 7936 00:18:31.687 }, 00:18:31.687 { 00:18:31.687 "name": "pt2", 00:18:31.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.687 "is_configured": true, 00:18:31.687 "data_offset": 256, 00:18:31.687 "data_size": 7936 00:18:31.687 } 00:18:31.687 ] 00:18:31.687 } 00:18:31.687 } 00:18:31.687 }' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:31.687 pt2' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:31.687 04:07:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:31.687 [2024-11-18 04:07:28.260713] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6d690cdc-5b17-41bd-8f9d-6f452022aa57 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 6d690cdc-5b17-41bd-8f9d-6f452022aa57 ']' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.687 [2024-11-18 04:07:28.308402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.687 [2024-11-18 04:07:28.308426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.687 [2024-11-18 04:07:28.308492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.687 [2024-11-18 04:07:28.308532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.687 [2024-11-18 04:07:28.308543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:31.687 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.947 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.947 04:07:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:31.947 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:31.947 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:31.947 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 04:07:28 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 [2024-11-18 04:07:28.448233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:31.948 [2024-11-18 04:07:28.449962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:31.948 [2024-11-18 04:07:28.450027] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:31.948 [2024-11-18 04:07:28.450098] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:31.948 [2024-11-18 04:07:28.450114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.948 [2024-11-18 04:07:28.450123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:31.948 request: 00:18:31.948 { 00:18:31.948 "name": "raid_bdev1", 00:18:31.948 "raid_level": "raid1", 00:18:31.948 "base_bdevs": [ 00:18:31.948 "malloc1", 00:18:31.948 "malloc2" 00:18:31.948 ], 00:18:31.948 "superblock": false, 00:18:31.948 "method": "bdev_raid_create", 00:18:31.948 "req_id": 1 00:18:31.948 } 00:18:31.948 Got JSON-RPC error response 00:18:31.948 response: 00:18:31.948 { 00:18:31.948 "code": -17, 00:18:31.948 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:31.948 } 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:31.948 04:07:28 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 [2024-11-18 04:07:28.516191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.948 [2024-11-18 04:07:28.516235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.948 [2024-11-18 04:07:28.516248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:31.948 [2024-11-18 04:07:28.516258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.948 [2024-11-18 04:07:28.518057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.948 [2024-11-18 04:07:28.518090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.948 [2024-11-18 04:07:28.518131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.948 [2024-11-18 04:07:28.518187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.948 pt1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.948 04:07:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.948 
"name": "raid_bdev1", 00:18:31.948 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:31.948 "strip_size_kb": 0, 00:18:31.948 "state": "configuring", 00:18:31.948 "raid_level": "raid1", 00:18:31.948 "superblock": true, 00:18:31.948 "num_base_bdevs": 2, 00:18:31.948 "num_base_bdevs_discovered": 1, 00:18:31.948 "num_base_bdevs_operational": 2, 00:18:31.948 "base_bdevs_list": [ 00:18:31.948 { 00:18:31.948 "name": "pt1", 00:18:31.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.948 "is_configured": true, 00:18:31.948 "data_offset": 256, 00:18:31.948 "data_size": 7936 00:18:31.948 }, 00:18:31.948 { 00:18:31.948 "name": null, 00:18:31.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.948 "is_configured": false, 00:18:31.948 "data_offset": 256, 00:18:31.948 "data_size": 7936 00:18:31.948 } 00:18:31.948 ] 00:18:31.948 }' 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.948 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.518 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:32.518 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:32.518 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:32.518 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:32.518 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.518 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.518 [2024-11-18 04:07:28.987350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:32.518 [2024-11-18 04:07:28.987401] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.518 [2024-11-18 04:07:28.987417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:32.518 [2024-11-18 04:07:28.987426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.518 [2024-11-18 04:07:28.987535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.518 [2024-11-18 04:07:28.987547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:32.518 [2024-11-18 04:07:28.987598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:32.518 [2024-11-18 04:07:28.987622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:32.518 [2024-11-18 04:07:28.987694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:32.518 [2024-11-18 04:07:28.987704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.519 [2024-11-18 04:07:28.987766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:32.519 [2024-11-18 04:07:28.987847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:32.519 [2024-11-18 04:07:28.987857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:32.519 [2024-11-18 04:07:28.987912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.519 pt2 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:32.519 04:07:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.519 04:07:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.519 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.519 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.519 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.519 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.519 "name": 
"raid_bdev1", 00:18:32.519 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:32.519 "strip_size_kb": 0, 00:18:32.519 "state": "online", 00:18:32.519 "raid_level": "raid1", 00:18:32.519 "superblock": true, 00:18:32.519 "num_base_bdevs": 2, 00:18:32.519 "num_base_bdevs_discovered": 2, 00:18:32.519 "num_base_bdevs_operational": 2, 00:18:32.519 "base_bdevs_list": [ 00:18:32.519 { 00:18:32.519 "name": "pt1", 00:18:32.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:32.519 "is_configured": true, 00:18:32.519 "data_offset": 256, 00:18:32.519 "data_size": 7936 00:18:32.519 }, 00:18:32.519 { 00:18:32.519 "name": "pt2", 00:18:32.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:32.519 "is_configured": true, 00:18:32.519 "data_offset": 256, 00:18:32.519 "data_size": 7936 00:18:32.519 } 00:18:32.519 ] 00:18:32.519 }' 00:18:32.519 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.519 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:33.088 04:07:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 [2024-11-18 04:07:29.438817] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:33.088 "name": "raid_bdev1", 00:18:33.088 "aliases": [ 00:18:33.088 "6d690cdc-5b17-41bd-8f9d-6f452022aa57" 00:18:33.088 ], 00:18:33.088 "product_name": "Raid Volume", 00:18:33.088 "block_size": 4128, 00:18:33.088 "num_blocks": 7936, 00:18:33.088 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:33.088 "md_size": 32, 00:18:33.088 "md_interleave": true, 00:18:33.088 "dif_type": 0, 00:18:33.088 "assigned_rate_limits": { 00:18:33.088 "rw_ios_per_sec": 0, 00:18:33.088 "rw_mbytes_per_sec": 0, 00:18:33.088 "r_mbytes_per_sec": 0, 00:18:33.088 "w_mbytes_per_sec": 0 00:18:33.088 }, 00:18:33.088 "claimed": false, 00:18:33.088 "zoned": false, 00:18:33.088 "supported_io_types": { 00:18:33.088 "read": true, 00:18:33.088 "write": true, 00:18:33.088 "unmap": false, 00:18:33.088 "flush": false, 00:18:33.088 "reset": true, 00:18:33.088 "nvme_admin": false, 00:18:33.088 "nvme_io": false, 00:18:33.088 "nvme_io_md": false, 00:18:33.088 "write_zeroes": true, 00:18:33.088 "zcopy": false, 00:18:33.088 "get_zone_info": false, 00:18:33.088 "zone_management": false, 00:18:33.088 "zone_append": false, 00:18:33.088 "compare": false, 00:18:33.088 "compare_and_write": false, 00:18:33.088 "abort": false, 00:18:33.088 "seek_hole": false, 00:18:33.088 "seek_data": false, 00:18:33.088 "copy": false, 00:18:33.088 "nvme_iov_md": 
false 00:18:33.088 }, 00:18:33.088 "memory_domains": [ 00:18:33.088 { 00:18:33.088 "dma_device_id": "system", 00:18:33.088 "dma_device_type": 1 00:18:33.088 }, 00:18:33.088 { 00:18:33.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.088 "dma_device_type": 2 00:18:33.088 }, 00:18:33.088 { 00:18:33.088 "dma_device_id": "system", 00:18:33.088 "dma_device_type": 1 00:18:33.088 }, 00:18:33.088 { 00:18:33.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.088 "dma_device_type": 2 00:18:33.088 } 00:18:33.088 ], 00:18:33.088 "driver_specific": { 00:18:33.088 "raid": { 00:18:33.088 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:33.088 "strip_size_kb": 0, 00:18:33.088 "state": "online", 00:18:33.088 "raid_level": "raid1", 00:18:33.088 "superblock": true, 00:18:33.088 "num_base_bdevs": 2, 00:18:33.088 "num_base_bdevs_discovered": 2, 00:18:33.088 "num_base_bdevs_operational": 2, 00:18:33.088 "base_bdevs_list": [ 00:18:33.088 { 00:18:33.088 "name": "pt1", 00:18:33.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:33.088 "is_configured": true, 00:18:33.088 "data_offset": 256, 00:18:33.088 "data_size": 7936 00:18:33.088 }, 00:18:33.088 { 00:18:33.088 "name": "pt2", 00:18:33.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.088 "is_configured": true, 00:18:33.088 "data_offset": 256, 00:18:33.088 "data_size": 7936 00:18:33.088 } 00:18:33.088 ] 00:18:33.088 } 00:18:33.088 } 00:18:33.088 }' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:33.088 pt2' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 [2024-11-18 04:07:29.678413] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 6d690cdc-5b17-41bd-8f9d-6f452022aa57 '!=' 6d690cdc-5b17-41bd-8f9d-6f452022aa57 ']' 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.088 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.088 [2024-11-18 04:07:29.722149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:33.349 "name": "raid_bdev1", 00:18:33.349 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:33.349 "strip_size_kb": 0, 00:18:33.349 "state": "online", 00:18:33.349 "raid_level": "raid1", 00:18:33.349 "superblock": true, 00:18:33.349 "num_base_bdevs": 2, 00:18:33.349 "num_base_bdevs_discovered": 1, 00:18:33.349 "num_base_bdevs_operational": 1, 00:18:33.349 "base_bdevs_list": [ 00:18:33.349 { 00:18:33.349 "name": null, 00:18:33.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.349 "is_configured": false, 00:18:33.349 "data_offset": 0, 00:18:33.349 "data_size": 7936 00:18:33.349 }, 00:18:33.349 { 00:18:33.349 "name": "pt2", 00:18:33.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.349 "is_configured": true, 00:18:33.349 "data_offset": 256, 00:18:33.349 "data_size": 7936 00:18:33.349 } 00:18:33.349 ] 00:18:33.349 }' 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.349 04:07:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.609 [2024-11-18 04:07:30.165433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.609 [2024-11-18 04:07:30.165459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.609 [2024-11-18 04:07:30.165508] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.609 [2024-11-18 04:07:30.165544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:33.609 [2024-11-18 04:07:30.165554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.609 [2024-11-18 04:07:30.237324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:33.609 [2024-11-18 04:07:30.237383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.609 [2024-11-18 04:07:30.237396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:33.609 [2024-11-18 04:07:30.237406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.609 [2024-11-18 04:07:30.239155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.609 [2024-11-18 04:07:30.239191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:33.609 [2024-11-18 04:07:30.239231] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:33.609 [2024-11-18 04:07:30.239273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:33.609 [2024-11-18 04:07:30.239323] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:33.609 [2024-11-18 04:07:30.239338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:33.609 [2024-11-18 04:07:30.239432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:33.609 [2024-11-18 04:07:30.239495] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:33.609 [2024-11-18 04:07:30.239502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:33.609 [2024-11-18 04:07:30.239553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.609 pt2 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.609 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.870 04:07:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.870 "name": "raid_bdev1", 00:18:33.870 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:33.870 "strip_size_kb": 0, 00:18:33.870 "state": "online", 00:18:33.870 "raid_level": "raid1", 00:18:33.870 "superblock": true, 00:18:33.870 "num_base_bdevs": 2, 00:18:33.870 "num_base_bdevs_discovered": 1, 00:18:33.870 "num_base_bdevs_operational": 1, 00:18:33.870 "base_bdevs_list": [ 00:18:33.870 { 00:18:33.870 "name": null, 00:18:33.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.870 "is_configured": false, 00:18:33.870 "data_offset": 256, 00:18:33.870 "data_size": 7936 00:18:33.870 }, 00:18:33.870 { 00:18:33.870 "name": "pt2", 00:18:33.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:33.870 "is_configured": true, 00:18:33.870 "data_offset": 256, 00:18:33.870 "data_size": 7936 00:18:33.870 } 00:18:33.870 ] 00:18:33.870 }' 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.870 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:34.130 04:07:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 [2024-11-18 04:07:30.656568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.130 [2024-11-18 04:07:30.656593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.130 [2024-11-18 04:07:30.656635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.130 [2024-11-18 04:07:30.656691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.130 [2024-11-18 04:07:30.656702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.130 [2024-11-18 04:07:30.716501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.130 [2024-11-18 04:07:30.716551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.130 [2024-11-18 04:07:30.716567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:34.130 [2024-11-18 04:07:30.716576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.130 pt1 00:18:34.130 [2024-11-18 04:07:30.718463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.130 [2024-11-18 04:07:30.718495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.130 [2024-11-18 04:07:30.718538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:34.130 [2024-11-18 04:07:30.718582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.130 [2024-11-18 04:07:30.718661] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:34.130 [2024-11-18 04:07:30.718670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.130 [2024-11-18 04:07:30.718684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:34.130 [2024-11-18 04:07:30.718740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:34.130 [2024-11-18 04:07:30.718794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: 
io device register 0x617000008900 00:18:34.130 [2024-11-18 04:07:30.718802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:34.130 [2024-11-18 04:07:30.718866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:34.130 [2024-11-18 04:07:30.718924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:34.130 [2024-11-18 04:07:30.718944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:34.130 [2024-11-18 04:07:30.719005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.130 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.131 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.391 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.391 "name": "raid_bdev1", 00:18:34.391 "uuid": "6d690cdc-5b17-41bd-8f9d-6f452022aa57", 00:18:34.391 "strip_size_kb": 0, 00:18:34.391 "state": "online", 00:18:34.391 "raid_level": "raid1", 00:18:34.391 "superblock": true, 00:18:34.391 "num_base_bdevs": 2, 00:18:34.391 "num_base_bdevs_discovered": 1, 00:18:34.391 "num_base_bdevs_operational": 1, 00:18:34.391 "base_bdevs_list": [ 00:18:34.391 { 00:18:34.391 "name": null, 00:18:34.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.391 "is_configured": false, 00:18:34.391 "data_offset": 256, 00:18:34.391 "data_size": 7936 00:18:34.391 }, 00:18:34.391 { 00:18:34.391 "name": "pt2", 00:18:34.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.391 "is_configured": true, 00:18:34.391 "data_offset": 256, 00:18:34.391 "data_size": 7936 00:18:34.391 } 00:18:34.391 ] 00:18:34.391 }' 00:18:34.391 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.391 04:07:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.651 [2024-11-18 04:07:31.219881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 6d690cdc-5b17-41bd-8f9d-6f452022aa57 '!=' 6d690cdc-5b17-41bd-8f9d-6f452022aa57 ']' 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88602 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88602 ']' 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@958 -- # kill -0 88602 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.651 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88602 00:18:34.911 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.911 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.911 killing process with pid 88602 00:18:34.911 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88602' 00:18:34.911 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88602 00:18:34.911 [2024-11-18 04:07:31.307445] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.911 [2024-11-18 04:07:31.307517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.911 [2024-11-18 04:07:31.307557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.911 [2024-11-18 04:07:31.307570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:34.911 04:07:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88602 00:18:34.911 [2024-11-18 04:07:31.498553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.297 04:07:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:36.297 00:18:36.297 real 0m5.972s 00:18:36.297 user 0m9.093s 00:18:36.297 sys 0m1.154s 00:18:36.297 04:07:32 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.297 04:07:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.297 ************************************ 00:18:36.297 END TEST raid_superblock_test_md_interleaved 00:18:36.297 ************************************ 00:18:36.297 04:07:32 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:36.297 04:07:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:36.297 04:07:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.297 04:07:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.297 ************************************ 00:18:36.297 START TEST raid_rebuild_test_sb_md_interleaved 00:18:36.297 ************************************ 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88925 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88925 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88925 ']' 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.297 04:07:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.297 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:36.297 Zero copy mechanism will not be used. 00:18:36.297 [2024-11-18 04:07:32.695969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:36.297 [2024-11-18 04:07:32.696101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88925 ] 00:18:36.297 [2024-11-18 04:07:32.873186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.557 [2024-11-18 04:07:32.975508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.557 [2024-11-18 04:07:33.174909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.558 [2024-11-18 04:07:33.174944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.128 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.128 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 BaseBdev1_malloc 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 [2024-11-18 04:07:33.554946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:37.129 [2024-11-18 04:07:33.555024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.129 [2024-11-18 04:07:33.555044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:37.129 [2024-11-18 04:07:33.555055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.129 [2024-11-18 04:07:33.556824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.129 [2024-11-18 04:07:33.556879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:37.129 BaseBdev1 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 BaseBdev2_malloc 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.129 [2024-11-18 04:07:33.607957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:37.129 [2024-11-18 04:07:33.608035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.129 [2024-11-18 04:07:33.608053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:37.129 [2024-11-18 04:07:33.608065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.129 [2024-11-18 04:07:33.609796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.129 [2024-11-18 04:07:33.609853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:37.129 BaseBdev2 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 spare_malloc 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 spare_delay 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 [2024-11-18 04:07:33.706521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:37.129 [2024-11-18 04:07:33.706578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.129 [2024-11-18 04:07:33.706597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:37.129 [2024-11-18 04:07:33.706608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.129 [2024-11-18 04:07:33.708366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.129 [2024-11-18 04:07:33.708404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:37.129 spare 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 [2024-11-18 04:07:33.718534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:37.129 [2024-11-18 04:07:33.720276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.129 [2024-11-18 
04:07:33.720471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:37.129 [2024-11-18 04:07:33.720492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:37.129 [2024-11-18 04:07:33.720565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:37.129 [2024-11-18 04:07:33.720631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:37.129 [2024-11-18 04:07:33.720640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:37.129 [2024-11-18 04:07:33.720700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.129 "name": "raid_bdev1", 00:18:37.129 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:37.129 "strip_size_kb": 0, 00:18:37.129 "state": "online", 00:18:37.129 "raid_level": "raid1", 00:18:37.129 "superblock": true, 00:18:37.129 "num_base_bdevs": 2, 00:18:37.129 "num_base_bdevs_discovered": 2, 00:18:37.129 "num_base_bdevs_operational": 2, 00:18:37.129 "base_bdevs_list": [ 00:18:37.129 { 00:18:37.129 "name": "BaseBdev1", 00:18:37.129 "uuid": "ddc2d516-d902-5633-9f6e-532526b1ae60", 00:18:37.129 "is_configured": true, 00:18:37.129 "data_offset": 256, 00:18:37.129 "data_size": 7936 00:18:37.129 }, 00:18:37.129 { 00:18:37.129 "name": "BaseBdev2", 00:18:37.129 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:37.129 "is_configured": true, 00:18:37.129 "data_offset": 256, 00:18:37.129 "data_size": 7936 00:18:37.129 } 00:18:37.129 ] 00:18:37.129 }' 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.129 04:07:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.700 04:07:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:37.700 [2024-11-18 04:07:34.185977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:37.700 04:07:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.700 [2024-11-18 04:07:34.281533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.700 04:07:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.700 "name": "raid_bdev1", 00:18:37.700 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:37.700 "strip_size_kb": 0, 00:18:37.700 "state": "online", 00:18:37.700 "raid_level": "raid1", 00:18:37.700 "superblock": true, 00:18:37.700 "num_base_bdevs": 2, 00:18:37.700 "num_base_bdevs_discovered": 1, 00:18:37.700 "num_base_bdevs_operational": 1, 00:18:37.700 "base_bdevs_list": [ 00:18:37.700 { 00:18:37.700 "name": null, 00:18:37.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.700 "is_configured": false, 00:18:37.700 "data_offset": 0, 00:18:37.700 "data_size": 7936 00:18:37.700 }, 00:18:37.700 { 00:18:37.700 "name": "BaseBdev2", 00:18:37.700 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:37.700 "is_configured": true, 00:18:37.700 "data_offset": 256, 00:18:37.700 "data_size": 7936 00:18:37.700 } 00:18:37.700 ] 00:18:37.700 }' 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.700 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.271 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.271 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.271 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.271 [2024-11-18 04:07:34.724799] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.271 [2024-11-18 04:07:34.741101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:38.271 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.271 04:07:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:38.271 [2024-11-18 04:07:34.742864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.211 "name": "raid_bdev1", 00:18:39.211 
"uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:39.211 "strip_size_kb": 0, 00:18:39.211 "state": "online", 00:18:39.211 "raid_level": "raid1", 00:18:39.211 "superblock": true, 00:18:39.211 "num_base_bdevs": 2, 00:18:39.211 "num_base_bdevs_discovered": 2, 00:18:39.211 "num_base_bdevs_operational": 2, 00:18:39.211 "process": { 00:18:39.211 "type": "rebuild", 00:18:39.211 "target": "spare", 00:18:39.211 "progress": { 00:18:39.211 "blocks": 2560, 00:18:39.211 "percent": 32 00:18:39.211 } 00:18:39.211 }, 00:18:39.211 "base_bdevs_list": [ 00:18:39.211 { 00:18:39.211 "name": "spare", 00:18:39.211 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:39.211 "is_configured": true, 00:18:39.211 "data_offset": 256, 00:18:39.211 "data_size": 7936 00:18:39.211 }, 00:18:39.211 { 00:18:39.211 "name": "BaseBdev2", 00:18:39.211 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:39.211 "is_configured": true, 00:18:39.211 "data_offset": 256, 00:18:39.211 "data_size": 7936 00:18:39.211 } 00:18:39.211 ] 00:18:39.211 }' 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.211 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.471 [2024-11-18 04:07:35.882362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:39.471 [2024-11-18 04:07:35.947240] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:39.471 [2024-11-18 04:07:35.947296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.471 [2024-11-18 04:07:35.947326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.471 [2024-11-18 04:07:35.947337] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.471 04:07:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.471 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.471 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.471 "name": "raid_bdev1", 00:18:39.471 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:39.471 "strip_size_kb": 0, 00:18:39.471 "state": "online", 00:18:39.471 "raid_level": "raid1", 00:18:39.471 "superblock": true, 00:18:39.471 "num_base_bdevs": 2, 00:18:39.471 "num_base_bdevs_discovered": 1, 00:18:39.471 "num_base_bdevs_operational": 1, 00:18:39.471 "base_bdevs_list": [ 00:18:39.471 { 00:18:39.472 "name": null, 00:18:39.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.472 "is_configured": false, 00:18:39.472 "data_offset": 0, 00:18:39.472 "data_size": 7936 00:18:39.472 }, 00:18:39.472 { 00:18:39.472 "name": "BaseBdev2", 00:18:39.472 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:39.472 "is_configured": true, 00:18:39.472 "data_offset": 256, 00:18:39.472 "data_size": 7936 00:18:39.472 } 00:18:39.472 ] 00:18:39.472 }' 00:18:39.472 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.472 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.042 "name": "raid_bdev1", 00:18:40.042 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:40.042 "strip_size_kb": 0, 00:18:40.042 "state": "online", 00:18:40.042 "raid_level": "raid1", 00:18:40.042 "superblock": true, 00:18:40.042 "num_base_bdevs": 2, 00:18:40.042 "num_base_bdevs_discovered": 1, 00:18:40.042 "num_base_bdevs_operational": 1, 00:18:40.042 "base_bdevs_list": [ 00:18:40.042 { 00:18:40.042 "name": null, 00:18:40.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.042 "is_configured": false, 00:18:40.042 "data_offset": 0, 00:18:40.042 "data_size": 7936 00:18:40.042 }, 00:18:40.042 { 00:18:40.042 "name": "BaseBdev2", 00:18:40.042 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:40.042 "is_configured": true, 00:18:40.042 "data_offset": 256, 00:18:40.042 "data_size": 7936 00:18:40.042 } 00:18:40.042 ] 00:18:40.042 }' 
00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.042 [2024-11-18 04:07:36.567374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.042 [2024-11-18 04:07:36.581865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.042 04:07:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:40.042 [2024-11-18 04:07:36.583604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.983 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.245 "name": "raid_bdev1", 00:18:41.245 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:41.245 "strip_size_kb": 0, 00:18:41.245 "state": "online", 00:18:41.245 "raid_level": "raid1", 00:18:41.245 "superblock": true, 00:18:41.245 "num_base_bdevs": 2, 00:18:41.245 "num_base_bdevs_discovered": 2, 00:18:41.245 "num_base_bdevs_operational": 2, 00:18:41.245 "process": { 00:18:41.245 "type": "rebuild", 00:18:41.245 "target": "spare", 00:18:41.245 "progress": { 00:18:41.245 "blocks": 2560, 00:18:41.245 "percent": 32 00:18:41.245 } 00:18:41.245 }, 00:18:41.245 "base_bdevs_list": [ 00:18:41.245 { 00:18:41.245 "name": "spare", 00:18:41.245 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:41.245 "is_configured": true, 00:18:41.245 "data_offset": 256, 00:18:41.245 "data_size": 7936 00:18:41.245 }, 00:18:41.245 { 00:18:41.245 "name": "BaseBdev2", 00:18:41.245 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:41.245 "is_configured": true, 00:18:41.245 "data_offset": 256, 00:18:41.245 "data_size": 7936 00:18:41.245 } 00:18:41.245 ] 00:18:41.245 }' 00:18:41.245 04:07:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:41.245 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=731 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.245 04:07:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.245 "name": "raid_bdev1", 00:18:41.245 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:41.245 "strip_size_kb": 0, 00:18:41.245 "state": "online", 00:18:41.245 "raid_level": "raid1", 00:18:41.245 "superblock": true, 00:18:41.245 "num_base_bdevs": 2, 00:18:41.245 "num_base_bdevs_discovered": 2, 00:18:41.245 "num_base_bdevs_operational": 2, 00:18:41.245 "process": { 00:18:41.245 "type": "rebuild", 00:18:41.245 "target": "spare", 00:18:41.245 "progress": { 00:18:41.245 "blocks": 2816, 00:18:41.245 "percent": 35 00:18:41.245 } 00:18:41.245 }, 00:18:41.245 "base_bdevs_list": [ 00:18:41.245 { 00:18:41.245 "name": "spare", 00:18:41.245 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:41.245 "is_configured": true, 00:18:41.245 "data_offset": 256, 00:18:41.245 "data_size": 7936 00:18:41.245 }, 00:18:41.245 { 00:18:41.245 "name": "BaseBdev2", 00:18:41.245 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:41.245 "is_configured": true, 00:18:41.245 "data_offset": 256, 00:18:41.245 "data_size": 7936 00:18:41.245 } 00:18:41.245 ] 00:18:41.245 }' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.245 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.505 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.505 04:07:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.445 04:07:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.445 "name": "raid_bdev1", 00:18:42.445 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:42.445 "strip_size_kb": 0, 00:18:42.445 "state": "online", 00:18:42.445 "raid_level": "raid1", 00:18:42.445 "superblock": true, 00:18:42.445 "num_base_bdevs": 2, 00:18:42.445 "num_base_bdevs_discovered": 2, 00:18:42.445 "num_base_bdevs_operational": 2, 00:18:42.445 "process": { 00:18:42.445 "type": "rebuild", 00:18:42.445 "target": "spare", 00:18:42.445 "progress": { 00:18:42.445 "blocks": 5888, 00:18:42.445 "percent": 74 00:18:42.445 } 00:18:42.445 }, 00:18:42.445 "base_bdevs_list": [ 00:18:42.445 { 00:18:42.445 "name": "spare", 00:18:42.445 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:42.445 "is_configured": true, 00:18:42.445 "data_offset": 256, 00:18:42.445 "data_size": 7936 00:18:42.445 }, 00:18:42.445 { 00:18:42.445 "name": "BaseBdev2", 00:18:42.445 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:42.445 "is_configured": true, 00:18:42.445 "data_offset": 256, 00:18:42.445 "data_size": 7936 00:18:42.445 } 00:18:42.445 ] 00:18:42.445 }' 00:18:42.445 04:07:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.445 04:07:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.445 04:07:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.445 04:07:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.445 04:07:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.385 [2024-11-18 04:07:39.694428] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:43.385 [2024-11-18 04:07:39.694496] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:43.385 [2024-11-18 04:07:39.694587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.645 "name": "raid_bdev1", 00:18:43.645 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:43.645 "strip_size_kb": 0, 00:18:43.645 "state": "online", 00:18:43.645 "raid_level": "raid1", 00:18:43.645 "superblock": true, 00:18:43.645 "num_base_bdevs": 2, 00:18:43.645 
"num_base_bdevs_discovered": 2, 00:18:43.645 "num_base_bdevs_operational": 2, 00:18:43.645 "base_bdevs_list": [ 00:18:43.645 { 00:18:43.645 "name": "spare", 00:18:43.645 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:43.645 "is_configured": true, 00:18:43.645 "data_offset": 256, 00:18:43.645 "data_size": 7936 00:18:43.645 }, 00:18:43.645 { 00:18:43.645 "name": "BaseBdev2", 00:18:43.645 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:43.645 "is_configured": true, 00:18:43.645 "data_offset": 256, 00:18:43.645 "data_size": 7936 00:18:43.645 } 00:18:43.645 ] 00:18:43.645 }' 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.645 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.646 04:07:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.646 "name": "raid_bdev1", 00:18:43.646 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:43.646 "strip_size_kb": 0, 00:18:43.646 "state": "online", 00:18:43.646 "raid_level": "raid1", 00:18:43.646 "superblock": true, 00:18:43.646 "num_base_bdevs": 2, 00:18:43.646 "num_base_bdevs_discovered": 2, 00:18:43.646 "num_base_bdevs_operational": 2, 00:18:43.646 "base_bdevs_list": [ 00:18:43.646 { 00:18:43.646 "name": "spare", 00:18:43.646 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:43.646 "is_configured": true, 00:18:43.646 "data_offset": 256, 00:18:43.646 "data_size": 7936 00:18:43.646 }, 00:18:43.646 { 00:18:43.646 "name": "BaseBdev2", 00:18:43.646 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:43.646 "is_configured": true, 00:18:43.646 "data_offset": 256, 00:18:43.646 "data_size": 7936 00:18:43.646 } 00:18:43.646 ] 00:18:43.646 }' 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.646 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.906 04:07:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.906 "name": 
"raid_bdev1", 00:18:43.906 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:43.906 "strip_size_kb": 0, 00:18:43.906 "state": "online", 00:18:43.906 "raid_level": "raid1", 00:18:43.906 "superblock": true, 00:18:43.906 "num_base_bdevs": 2, 00:18:43.906 "num_base_bdevs_discovered": 2, 00:18:43.906 "num_base_bdevs_operational": 2, 00:18:43.906 "base_bdevs_list": [ 00:18:43.906 { 00:18:43.906 "name": "spare", 00:18:43.906 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:43.906 "is_configured": true, 00:18:43.906 "data_offset": 256, 00:18:43.906 "data_size": 7936 00:18:43.906 }, 00:18:43.906 { 00:18:43.906 "name": "BaseBdev2", 00:18:43.906 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:43.906 "is_configured": true, 00:18:43.906 "data_offset": 256, 00:18:43.906 "data_size": 7936 00:18:43.906 } 00:18:43.906 ] 00:18:43.906 }' 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.906 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.166 [2024-11-18 04:07:40.772902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.166 [2024-11-18 04:07:40.772934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.166 [2024-11-18 04:07:40.773005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.166 [2024-11-18 04:07:40.773096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.166 [2024-11-18 
04:07:40.773108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.166 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.427 04:07:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.427 [2024-11-18 04:07:40.844894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:44.427 [2024-11-18 04:07:40.845001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.427 [2024-11-18 04:07:40.845038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:44.427 [2024-11-18 04:07:40.845065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.427 [2024-11-18 04:07:40.847017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.427 [2024-11-18 04:07:40.847097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:44.427 [2024-11-18 04:07:40.847164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:44.427 [2024-11-18 04:07:40.847246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.427 [2024-11-18 04:07:40.847368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.427 spare 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.427 [2024-11-18 04:07:40.947292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:44.427 [2024-11-18 04:07:40.947361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:44.427 [2024-11-18 04:07:40.947453] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:44.427 [2024-11-18 04:07:40.947566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:44.427 [2024-11-18 04:07:40.947595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:44.427 [2024-11-18 04:07:40.947715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.427 04:07:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.427 04:07:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.427 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.427 "name": "raid_bdev1", 00:18:44.427 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:44.427 "strip_size_kb": 0, 00:18:44.427 "state": "online", 00:18:44.427 "raid_level": "raid1", 00:18:44.427 "superblock": true, 00:18:44.427 "num_base_bdevs": 2, 00:18:44.427 "num_base_bdevs_discovered": 2, 00:18:44.427 "num_base_bdevs_operational": 2, 00:18:44.427 "base_bdevs_list": [ 00:18:44.427 { 00:18:44.427 "name": "spare", 00:18:44.427 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:44.427 "is_configured": true, 00:18:44.427 "data_offset": 256, 00:18:44.428 "data_size": 7936 00:18:44.428 }, 00:18:44.428 { 00:18:44.428 "name": "BaseBdev2", 00:18:44.428 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:44.428 "is_configured": true, 00:18:44.428 "data_offset": 256, 00:18:44.428 "data_size": 7936 00:18:44.428 } 00:18:44.428 ] 00:18:44.428 }' 00:18:44.428 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.428 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.998 04:07:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.998 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.998 "name": "raid_bdev1", 00:18:44.998 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:44.998 "strip_size_kb": 0, 00:18:44.998 "state": "online", 00:18:44.998 "raid_level": "raid1", 00:18:44.998 "superblock": true, 00:18:44.998 "num_base_bdevs": 2, 00:18:44.998 "num_base_bdevs_discovered": 2, 00:18:44.998 "num_base_bdevs_operational": 2, 00:18:44.998 "base_bdevs_list": [ 00:18:44.998 { 00:18:44.998 "name": "spare", 00:18:44.998 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:44.998 "is_configured": true, 00:18:44.998 "data_offset": 256, 00:18:44.998 "data_size": 7936 00:18:44.998 }, 00:18:44.998 { 00:18:44.998 "name": "BaseBdev2", 00:18:44.998 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:44.998 "is_configured": true, 00:18:44.998 "data_offset": 256, 00:18:44.998 "data_size": 7936 00:18:44.998 } 00:18:44.998 ] 00:18:44.998 }' 00:18:44.999 04:07:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.999 [2024-11-18 04:07:41.603699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.999 04:07:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.999 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.259 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.259 "name": "raid_bdev1", 00:18:45.259 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:45.259 "strip_size_kb": 0, 00:18:45.259 "state": "online", 00:18:45.259 
"raid_level": "raid1", 00:18:45.259 "superblock": true, 00:18:45.259 "num_base_bdevs": 2, 00:18:45.259 "num_base_bdevs_discovered": 1, 00:18:45.259 "num_base_bdevs_operational": 1, 00:18:45.259 "base_bdevs_list": [ 00:18:45.259 { 00:18:45.259 "name": null, 00:18:45.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.259 "is_configured": false, 00:18:45.259 "data_offset": 0, 00:18:45.259 "data_size": 7936 00:18:45.259 }, 00:18:45.259 { 00:18:45.259 "name": "BaseBdev2", 00:18:45.259 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:45.259 "is_configured": true, 00:18:45.259 "data_offset": 256, 00:18:45.259 "data_size": 7936 00:18:45.259 } 00:18:45.259 ] 00:18:45.259 }' 00:18:45.259 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.259 04:07:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.520 04:07:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.520 04:07:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.520 04:07:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.520 [2024-11-18 04:07:42.046963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.520 [2024-11-18 04:07:42.047131] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.520 [2024-11-18 04:07:42.047215] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:45.520 [2024-11-18 04:07:42.047264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.520 [2024-11-18 04:07:42.061999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:45.520 04:07:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.520 [2024-11-18 04:07:42.063740] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.520 04:07:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.459 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:46.719 "name": "raid_bdev1", 00:18:46.719 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:46.719 "strip_size_kb": 0, 00:18:46.719 "state": "online", 00:18:46.719 "raid_level": "raid1", 00:18:46.719 "superblock": true, 00:18:46.719 "num_base_bdevs": 2, 00:18:46.719 "num_base_bdevs_discovered": 2, 00:18:46.719 "num_base_bdevs_operational": 2, 00:18:46.719 "process": { 00:18:46.719 "type": "rebuild", 00:18:46.719 "target": "spare", 00:18:46.719 "progress": { 00:18:46.719 "blocks": 2560, 00:18:46.719 "percent": 32 00:18:46.719 } 00:18:46.719 }, 00:18:46.719 "base_bdevs_list": [ 00:18:46.719 { 00:18:46.719 "name": "spare", 00:18:46.719 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:46.719 "is_configured": true, 00:18:46.719 "data_offset": 256, 00:18:46.719 "data_size": 7936 00:18:46.719 }, 00:18:46.719 { 00:18:46.719 "name": "BaseBdev2", 00:18:46.719 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:46.719 "is_configured": true, 00:18:46.719 "data_offset": 256, 00:18:46.719 "data_size": 7936 00:18:46.719 } 00:18:46.719 ] 00:18:46.719 }' 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.719 [2024-11-18 04:07:43.223731] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.719 [2024-11-18 04:07:43.268134] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:46.719 [2024-11-18 04:07:43.268206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.719 [2024-11-18 04:07:43.268220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.719 [2024-11-18 04:07:43.268228] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.719 04:07:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.719 "name": "raid_bdev1", 00:18:46.719 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:46.719 "strip_size_kb": 0, 00:18:46.719 "state": "online", 00:18:46.719 "raid_level": "raid1", 00:18:46.719 "superblock": true, 00:18:46.719 "num_base_bdevs": 2, 00:18:46.719 "num_base_bdevs_discovered": 1, 00:18:46.719 "num_base_bdevs_operational": 1, 00:18:46.719 "base_bdevs_list": [ 00:18:46.719 { 00:18:46.719 "name": null, 00:18:46.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.719 "is_configured": false, 00:18:46.719 "data_offset": 0, 00:18:46.719 "data_size": 7936 00:18:46.719 }, 00:18:46.719 { 00:18:46.719 "name": "BaseBdev2", 00:18:46.719 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:46.719 "is_configured": true, 00:18:46.719 "data_offset": 256, 00:18:46.719 "data_size": 7936 00:18:46.719 } 00:18:46.719 ] 00:18:46.719 }' 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.719 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.289 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.289 04:07:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.289 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.289 [2024-11-18 04:07:43.772339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.289 [2024-11-18 04:07:43.772394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.289 [2024-11-18 04:07:43.772412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:47.289 [2024-11-18 04:07:43.772422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.289 [2024-11-18 04:07:43.772586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.289 [2024-11-18 04:07:43.772601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.289 [2024-11-18 04:07:43.772643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:47.289 [2024-11-18 04:07:43.772654] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:47.289 [2024-11-18 04:07:43.772662] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:47.289 [2024-11-18 04:07:43.772686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.289 [2024-11-18 04:07:43.787434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:47.289 spare 00:18:47.289 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.290 04:07:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:47.290 [2024-11-18 04:07:43.789205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:48.228 "name": "raid_bdev1", 00:18:48.228 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:48.228 "strip_size_kb": 0, 00:18:48.228 "state": "online", 00:18:48.228 "raid_level": "raid1", 00:18:48.228 "superblock": true, 00:18:48.228 "num_base_bdevs": 2, 00:18:48.228 "num_base_bdevs_discovered": 2, 00:18:48.228 "num_base_bdevs_operational": 2, 00:18:48.228 "process": { 00:18:48.228 "type": "rebuild", 00:18:48.228 "target": "spare", 00:18:48.228 "progress": { 00:18:48.228 "blocks": 2560, 00:18:48.228 "percent": 32 00:18:48.228 } 00:18:48.228 }, 00:18:48.228 "base_bdevs_list": [ 00:18:48.228 { 00:18:48.228 "name": "spare", 00:18:48.228 "uuid": "c1165286-6f0c-526f-b8f5-f2ed01ee1ac9", 00:18:48.228 "is_configured": true, 00:18:48.228 "data_offset": 256, 00:18:48.228 "data_size": 7936 00:18:48.228 }, 00:18:48.228 { 00:18:48.228 "name": "BaseBdev2", 00:18:48.228 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:48.228 "is_configured": true, 00:18:48.228 "data_offset": 256, 00:18:48.228 "data_size": 7936 00:18:48.228 } 00:18:48.228 ] 00:18:48.228 }' 00:18:48.228 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.488 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.488 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.488 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.488 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:48.488 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.488 04:07:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.488 [2024-11-18 
04:07:44.949147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.488 [2024-11-18 04:07:44.993539] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:48.488 [2024-11-18 04:07:44.993592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.488 [2024-11-18 04:07:44.993608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.488 [2024-11-18 04:07:44.993614] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.488 04:07:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.488 "name": "raid_bdev1", 00:18:48.488 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:48.488 "strip_size_kb": 0, 00:18:48.488 "state": "online", 00:18:48.488 "raid_level": "raid1", 00:18:48.488 "superblock": true, 00:18:48.488 "num_base_bdevs": 2, 00:18:48.488 "num_base_bdevs_discovered": 1, 00:18:48.488 "num_base_bdevs_operational": 1, 00:18:48.488 "base_bdevs_list": [ 00:18:48.488 { 00:18:48.488 "name": null, 00:18:48.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.488 "is_configured": false, 00:18:48.488 "data_offset": 0, 00:18:48.488 "data_size": 7936 00:18:48.488 }, 00:18:48.488 { 00:18:48.488 "name": "BaseBdev2", 00:18:48.488 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:48.488 "is_configured": true, 00:18:48.488 "data_offset": 256, 00:18:48.488 "data_size": 7936 00:18:48.488 } 00:18:48.488 ] 00:18:48.488 }' 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.488 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.058 04:07:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.058 "name": "raid_bdev1", 00:18:49.058 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:49.058 "strip_size_kb": 0, 00:18:49.058 "state": "online", 00:18:49.058 "raid_level": "raid1", 00:18:49.058 "superblock": true, 00:18:49.058 "num_base_bdevs": 2, 00:18:49.058 "num_base_bdevs_discovered": 1, 00:18:49.058 "num_base_bdevs_operational": 1, 00:18:49.058 "base_bdevs_list": [ 00:18:49.058 { 00:18:49.058 "name": null, 00:18:49.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.058 "is_configured": false, 00:18:49.058 "data_offset": 0, 00:18:49.058 "data_size": 7936 00:18:49.058 }, 00:18:49.058 { 00:18:49.058 "name": "BaseBdev2", 00:18:49.058 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:49.058 "is_configured": true, 00:18:49.058 "data_offset": 256, 
00:18:49.058 "data_size": 7936 00:18:49.058 } 00:18:49.058 ] 00:18:49.058 }' 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.058 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.059 [2024-11-18 04:07:45.553216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:49.059 [2024-11-18 04:07:45.553269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.059 [2024-11-18 04:07:45.553291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:49.059 [2024-11-18 04:07:45.553299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.059 [2024-11-18 04:07:45.553436] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.059 [2024-11-18 04:07:45.553446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.059 [2024-11-18 04:07:45.553491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:49.059 [2024-11-18 04:07:45.553502] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:49.059 [2024-11-18 04:07:45.553511] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:49.059 [2024-11-18 04:07:45.553519] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:49.059 BaseBdev1 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.059 04:07:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.999 04:07:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.999 "name": "raid_bdev1", 00:18:49.999 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:49.999 "strip_size_kb": 0, 00:18:49.999 "state": "online", 00:18:49.999 "raid_level": "raid1", 00:18:49.999 "superblock": true, 00:18:49.999 "num_base_bdevs": 2, 00:18:49.999 "num_base_bdevs_discovered": 1, 00:18:49.999 "num_base_bdevs_operational": 1, 00:18:49.999 "base_bdevs_list": [ 00:18:49.999 { 00:18:49.999 "name": null, 00:18:49.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.999 "is_configured": false, 00:18:49.999 "data_offset": 0, 00:18:49.999 "data_size": 7936 00:18:49.999 }, 00:18:49.999 { 00:18:49.999 "name": "BaseBdev2", 00:18:49.999 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:49.999 "is_configured": true, 00:18:49.999 "data_offset": 256, 00:18:49.999 "data_size": 7936 00:18:49.999 } 00:18:49.999 ] 00:18:49.999 }' 00:18:49.999 04:07:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.999 04:07:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.574 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.574 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.574 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.574 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.574 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.575 "name": "raid_bdev1", 00:18:50.575 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:50.575 "strip_size_kb": 0, 00:18:50.575 "state": "online", 00:18:50.575 "raid_level": "raid1", 00:18:50.575 "superblock": true, 00:18:50.575 "num_base_bdevs": 2, 00:18:50.575 "num_base_bdevs_discovered": 1, 00:18:50.575 "num_base_bdevs_operational": 1, 00:18:50.575 "base_bdevs_list": [ 00:18:50.575 { 00:18:50.575 "name": 
null, 00:18:50.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.575 "is_configured": false, 00:18:50.575 "data_offset": 0, 00:18:50.575 "data_size": 7936 00:18:50.575 }, 00:18:50.575 { 00:18:50.575 "name": "BaseBdev2", 00:18:50.575 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:50.575 "is_configured": true, 00:18:50.575 "data_offset": 256, 00:18:50.575 "data_size": 7936 00:18:50.575 } 00:18:50.575 ] 00:18:50.575 }' 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.575 [2024-11-18 04:07:47.178425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.575 [2024-11-18 04:07:47.178552] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.575 [2024-11-18 04:07:47.178568] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:50.575 request: 00:18:50.575 { 00:18:50.575 "base_bdev": "BaseBdev1", 00:18:50.575 "raid_bdev": "raid_bdev1", 00:18:50.575 "method": "bdev_raid_add_base_bdev", 00:18:50.575 "req_id": 1 00:18:50.575 } 00:18:50.575 Got JSON-RPC error response 00:18:50.575 response: 00:18:50.575 { 00:18:50.575 "code": -22, 00:18:50.575 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:50.575 } 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.575 04:07:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.970 "name": "raid_bdev1", 00:18:51.970 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:51.970 "strip_size_kb": 0, 
00:18:51.970 "state": "online", 00:18:51.970 "raid_level": "raid1", 00:18:51.970 "superblock": true, 00:18:51.970 "num_base_bdevs": 2, 00:18:51.970 "num_base_bdevs_discovered": 1, 00:18:51.970 "num_base_bdevs_operational": 1, 00:18:51.970 "base_bdevs_list": [ 00:18:51.970 { 00:18:51.970 "name": null, 00:18:51.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.970 "is_configured": false, 00:18:51.970 "data_offset": 0, 00:18:51.970 "data_size": 7936 00:18:51.970 }, 00:18:51.970 { 00:18:51.970 "name": "BaseBdev2", 00:18:51.970 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:51.970 "is_configured": true, 00:18:51.970 "data_offset": 256, 00:18:51.970 "data_size": 7936 00:18:51.970 } 00:18:51.970 ] 00:18:51.970 }' 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.970 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.230 
04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.230 "name": "raid_bdev1", 00:18:52.230 "uuid": "0e16a496-c625-4a60-8656-ac268df47596", 00:18:52.230 "strip_size_kb": 0, 00:18:52.230 "state": "online", 00:18:52.230 "raid_level": "raid1", 00:18:52.230 "superblock": true, 00:18:52.230 "num_base_bdevs": 2, 00:18:52.230 "num_base_bdevs_discovered": 1, 00:18:52.230 "num_base_bdevs_operational": 1, 00:18:52.230 "base_bdevs_list": [ 00:18:52.230 { 00:18:52.230 "name": null, 00:18:52.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.230 "is_configured": false, 00:18:52.230 "data_offset": 0, 00:18:52.230 "data_size": 7936 00:18:52.230 }, 00:18:52.230 { 00:18:52.230 "name": "BaseBdev2", 00:18:52.230 "uuid": "43d3a4a5-3338-5653-a3ae-e8b7eb506b75", 00:18:52.230 "is_configured": true, 00:18:52.230 "data_offset": 256, 00:18:52.230 "data_size": 7936 00:18:52.230 } 00:18:52.230 ] 00:18:52.230 }' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88925 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88925 ']' 00:18:52.230 04:07:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88925 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88925 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.230 killing process with pid 88925 00:18:52.230 Received shutdown signal, test time was about 60.000000 seconds 00:18:52.230 00:18:52.230 Latency(us) 00:18:52.230 [2024-11-18T04:07:48.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.230 [2024-11-18T04:07:48.871Z] =================================================================================================================== 00:18:52.230 [2024-11-18T04:07:48.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88925' 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88925 00:18:52.230 [2024-11-18 04:07:48.779171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.230 [2024-11-18 04:07:48.779273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.230 [2024-11-18 04:07:48.779310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.230 [2024-11-18 04:07:48.779321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:52.230 04:07:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88925 00:18:52.490 [2024-11-18 04:07:49.058599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.429 ************************************ 00:18:53.429 END TEST raid_rebuild_test_sb_md_interleaved 00:18:53.429 ************************************ 00:18:53.429 04:07:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:53.429 00:18:53.429 real 0m17.470s 00:18:53.429 user 0m22.944s 00:18:53.429 sys 0m1.672s 00:18:53.429 04:07:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.429 04:07:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.689 04:07:50 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:53.689 04:07:50 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:53.689 04:07:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88925 ']' 00:18:53.689 04:07:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88925 00:18:53.689 04:07:50 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:53.689 00:18:53.689 real 11m53.258s 00:18:53.689 user 16m0.670s 00:18:53.689 sys 1m54.799s 00:18:53.689 ************************************ 00:18:53.689 END TEST bdev_raid 00:18:53.689 ************************************ 00:18:53.689 04:07:50 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.689 04:07:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.689 04:07:50 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:53.689 04:07:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:53.689 04:07:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.689 04:07:50 -- common/autotest_common.sh@10 -- # set +x 00:18:53.689 
************************************ 00:18:53.689 START TEST spdkcli_raid 00:18:53.689 ************************************ 00:18:53.689 04:07:50 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:53.950 * Looking for test storage... 00:18:53.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.950 04:07:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:53.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.950 --rc genhtml_branch_coverage=1 00:18:53.950 --rc genhtml_function_coverage=1 00:18:53.950 --rc genhtml_legend=1 00:18:53.950 --rc geninfo_all_blocks=1 00:18:53.950 --rc geninfo_unexecuted_blocks=1 00:18:53.950 00:18:53.950 ' 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:53.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.950 --rc genhtml_branch_coverage=1 00:18:53.950 --rc genhtml_function_coverage=1 00:18:53.950 --rc genhtml_legend=1 00:18:53.950 --rc geninfo_all_blocks=1 00:18:53.950 --rc geninfo_unexecuted_blocks=1 00:18:53.950 00:18:53.950 ' 00:18:53.950 
04:07:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:53.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.950 --rc genhtml_branch_coverage=1 00:18:53.950 --rc genhtml_function_coverage=1 00:18:53.950 --rc genhtml_legend=1 00:18:53.950 --rc geninfo_all_blocks=1 00:18:53.950 --rc geninfo_unexecuted_blocks=1 00:18:53.950 00:18:53.950 ' 00:18:53.950 04:07:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:53.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.950 --rc genhtml_branch_coverage=1 00:18:53.950 --rc genhtml_function_coverage=1 00:18:53.950 --rc genhtml_legend=1 00:18:53.950 --rc geninfo_all_blocks=1 00:18:53.950 --rc geninfo_unexecuted_blocks=1 00:18:53.950 00:18:53.950 ' 00:18:53.950 04:07:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:53.950 04:07:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:53.951 04:07:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89611 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:53.951 04:07:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89611 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89611 ']' 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.951 04:07:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.951 [2024-11-18 04:07:50.562313] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:53.951 [2024-11-18 04:07:50.562510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89611 ] 00:18:54.211 [2024-11-18 04:07:50.736338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:54.211 [2024-11-18 04:07:50.841725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.212 [2024-11-18 04:07:50.841756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.152 04:07:51 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.152 04:07:51 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:55.152 04:07:51 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:55.152 04:07:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.152 04:07:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.152 04:07:51 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:55.153 04:07:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.153 04:07:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.153 04:07:51 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:55.153 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:55.153 ' 00:18:57.063 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:57.063 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:57.063 04:07:53 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:57.063 04:07:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.063 04:07:53 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.063 04:07:53 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:57.063 04:07:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.063 04:07:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.063 04:07:53 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:57.063 ' 00:18:58.002 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:58.002 04:07:54 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:58.002 04:07:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.002 04:07:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.002 04:07:54 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:58.002 04:07:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.002 04:07:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.002 04:07:54 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:58.002 04:07:54 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:58.573 04:07:55 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:58.573 04:07:55 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:58.573 04:07:55 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:58.573 04:07:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.573 04:07:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.573 04:07:55 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:58.573 04:07:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.573 04:07:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.573 04:07:55 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:58.573 ' 00:18:59.513 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:59.773 04:07:56 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:59.773 04:07:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.773 04:07:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.773 04:07:56 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:59.773 04:07:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.773 04:07:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.773 04:07:56 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:59.773 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:59.773 ' 00:19:01.154 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:01.154 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:01.415 04:07:57 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:01.415 04:07:57 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89611 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89611 ']' 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89611 00:19:01.415 04:07:57 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89611 00:19:01.415 killing process with pid 89611 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89611' 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89611 00:19:01.415 04:07:57 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89611 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89611 ']' 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89611 00:19:03.956 04:08:00 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89611 ']' 00:19:03.956 Process with pid 89611 is not found 00:19:03.956 04:08:00 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89611 00:19:03.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89611) - No such process 00:19:03.956 04:08:00 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89611 is not found' 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:03.956 04:08:00 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:03.956 00:19:03.956 real 0m9.996s 00:19:03.956 user 0m20.553s 00:19:03.956 sys 
0m1.153s 00:19:03.956 04:08:00 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.956 04:08:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:03.956 ************************************ 00:19:03.956 END TEST spdkcli_raid 00:19:03.956 ************************************ 00:19:03.956 04:08:00 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:03.956 04:08:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:03.956 04:08:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.956 04:08:00 -- common/autotest_common.sh@10 -- # set +x 00:19:03.956 ************************************ 00:19:03.956 START TEST blockdev_raid5f 00:19:03.956 ************************************ 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:03.956 * Looking for test storage... 00:19:03.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.956 04:08:00 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:03.956 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.956 --rc genhtml_branch_coverage=1 00:19:03.956 --rc genhtml_function_coverage=1 00:19:03.956 --rc genhtml_legend=1 00:19:03.956 --rc geninfo_all_blocks=1 00:19:03.956 --rc geninfo_unexecuted_blocks=1 00:19:03.956 00:19:03.956 ' 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:03.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.956 --rc genhtml_branch_coverage=1 00:19:03.956 --rc genhtml_function_coverage=1 00:19:03.956 --rc genhtml_legend=1 00:19:03.956 --rc geninfo_all_blocks=1 00:19:03.956 --rc geninfo_unexecuted_blocks=1 00:19:03.956 00:19:03.956 ' 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:03.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.956 --rc genhtml_branch_coverage=1 00:19:03.956 --rc genhtml_function_coverage=1 00:19:03.956 --rc genhtml_legend=1 00:19:03.956 --rc geninfo_all_blocks=1 00:19:03.956 --rc geninfo_unexecuted_blocks=1 00:19:03.956 00:19:03.956 ' 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:03.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.956 --rc genhtml_branch_coverage=1 00:19:03.956 --rc genhtml_function_coverage=1 00:19:03.956 --rc genhtml_legend=1 00:19:03.956 --rc geninfo_all_blocks=1 00:19:03.956 --rc geninfo_unexecuted_blocks=1 00:19:03.956 00:19:03.956 ' 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89896 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:03.956 04:08:00 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89896 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89896 ']' 00:19:03.956 04:08:00 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.957 04:08:00 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.957 04:08:00 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.957 04:08:00 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.957 04:08:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.217 [2024-11-18 04:08:00.630001] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:04.217 [2024-11-18 04:08:00.630124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89896 ] 00:19:04.217 [2024-11-18 04:08:00.808999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.477 [2024-11-18 04:08:00.916422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 Malloc0 00:19:05.416 Malloc1 00:19:05.416 Malloc2 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 04:08:01 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:05.416 04:08:01 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.416 04:08:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.416 04:08:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.416 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:05.416 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dd4ecc20-07c8-45aa-9033-254f3061c6fc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "dd4ecc20-07c8-45aa-9033-254f3061c6fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dd4ecc20-07c8-45aa-9033-254f3061c6fc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "eb926eb9-8ce0-4fed-909b-150b81c50b35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c7a82f81-c2b1-4f5c-8c96-3a911d4f24ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "91157486-b9b4-442a-9b09-257589ac8cf8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:05.416 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:05.677 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:05.677 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:05.677 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:05.677 04:08:02 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89896 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89896 ']' 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89896 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.677 
04:08:02 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89896 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.677 killing process with pid 89896 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89896' 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89896 00:19:05.677 04:08:02 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89896 00:19:08.219 04:08:04 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:08.219 04:08:04 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:08.219 04:08:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:08.219 04:08:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.219 04:08:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.219 ************************************ 00:19:08.219 START TEST bdev_hello_world 00:19:08.219 ************************************ 00:19:08.219 04:08:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:08.219 [2024-11-18 04:08:04.728412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:08.219 [2024-11-18 04:08:04.728520] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89959 ] 00:19:08.479 [2024-11-18 04:08:04.902859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.479 [2024-11-18 04:08:05.012256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.049 [2024-11-18 04:08:05.513798] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:09.049 [2024-11-18 04:08:05.513853] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:09.049 [2024-11-18 04:08:05.513870] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:09.049 [2024-11-18 04:08:05.514315] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:09.049 [2024-11-18 04:08:05.514439] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:09.049 [2024-11-18 04:08:05.514457] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:09.049 [2024-11-18 04:08:05.514500] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:09.049 00:19:09.049 [2024-11-18 04:08:05.514517] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:10.431 00:19:10.431 real 0m2.148s 00:19:10.431 user 0m1.775s 00:19:10.431 sys 0m0.256s 00:19:10.431 04:08:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.431 04:08:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:10.431 ************************************ 00:19:10.431 END TEST bdev_hello_world 00:19:10.431 ************************************ 00:19:10.431 04:08:06 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:10.431 04:08:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.431 04:08:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.431 04:08:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:10.431 ************************************ 00:19:10.431 START TEST bdev_bounds 00:19:10.431 ************************************ 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90005 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:10.431 Process bdevio pid: 90005 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90005' 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90005 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90005 ']' 00:19:10.431 04:08:06 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.431 04:08:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.432 04:08:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.432 04:08:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:10.432 [2024-11-18 04:08:06.955724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:10.432 [2024-11-18 04:08:06.955887] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90005 ] 00:19:10.692 [2024-11-18 04:08:07.134504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:10.692 [2024-11-18 04:08:07.243943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.692 [2024-11-18 04:08:07.244111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.692 [2024-11-18 04:08:07.244207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.261 04:08:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.261 04:08:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:11.261 04:08:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:11.261 I/O targets: 00:19:11.261 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:11.261 00:19:11.261 
00:19:11.261 CUnit - A unit testing framework for C - Version 2.1-3 00:19:11.261 http://cunit.sourceforge.net/ 00:19:11.261 00:19:11.261 00:19:11.261 Suite: bdevio tests on: raid5f 00:19:11.261 Test: blockdev write read block ...passed 00:19:11.261 Test: blockdev write zeroes read block ...passed 00:19:11.261 Test: blockdev write zeroes read no split ...passed 00:19:11.521 Test: blockdev write zeroes read split ...passed 00:19:11.521 Test: blockdev write zeroes read split partial ...passed 00:19:11.521 Test: blockdev reset ...passed 00:19:11.521 Test: blockdev write read 8 blocks ...passed 00:19:11.521 Test: blockdev write read size > 128k ...passed 00:19:11.521 Test: blockdev write read invalid size ...passed 00:19:11.521 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:11.521 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:11.521 Test: blockdev write read max offset ...passed 00:19:11.521 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:11.521 Test: blockdev writev readv 8 blocks ...passed 00:19:11.521 Test: blockdev writev readv 30 x 1block ...passed 00:19:11.521 Test: blockdev writev readv block ...passed 00:19:11.521 Test: blockdev writev readv size > 128k ...passed 00:19:11.521 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:11.521 Test: blockdev comparev and writev ...passed 00:19:11.521 Test: blockdev nvme passthru rw ...passed 00:19:11.521 Test: blockdev nvme passthru vendor specific ...passed 00:19:11.521 Test: blockdev nvme admin passthru ...passed 00:19:11.521 Test: blockdev copy ...passed 00:19:11.521 00:19:11.521 Run Summary: Type Total Ran Passed Failed Inactive 00:19:11.521 suites 1 1 n/a 0 0 00:19:11.521 tests 23 23 23 0 0 00:19:11.521 asserts 130 130 130 0 n/a 00:19:11.521 00:19:11.521 Elapsed time = 0.609 seconds 00:19:11.521 0 00:19:11.521 04:08:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90005 00:19:11.521 
04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90005 ']' 00:19:11.521 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90005 00:19:11.521 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:11.521 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.521 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90005 00:19:11.780 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.780 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.780 killing process with pid 90005 00:19:11.780 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90005' 00:19:11.780 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90005 00:19:11.780 04:08:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90005 00:19:13.163 04:08:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:13.163 00:19:13.163 real 0m2.607s 00:19:13.163 user 0m6.433s 00:19:13.163 sys 0m0.403s 00:19:13.163 04:08:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.163 04:08:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:13.163 ************************************ 00:19:13.163 END TEST bdev_bounds 00:19:13.163 ************************************ 00:19:13.163 04:08:09 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:13.163 04:08:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:13.163 04:08:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.163 
04:08:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:13.163 ************************************ 00:19:13.163 START TEST bdev_nbd 00:19:13.163 ************************************ 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90065 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90065 /var/tmp/spdk-nbd.sock 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90065 ']' 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.163 04:08:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:13.163 [2024-11-18 04:08:09.642623] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:13.163 [2024-11-18 04:08:09.642749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.423 [2024-11-18 04:08:09.820448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.423 [2024-11-18 04:08:09.926218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:13.996 04:08:10 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:14.278 1+0 records in 00:19:14.278 1+0 records out 00:19:14.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376345 s, 10.9 MB/s 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:14.278 { 00:19:14.278 "nbd_device": "/dev/nbd0", 00:19:14.278 "bdev_name": "raid5f" 00:19:14.278 } 00:19:14.278 ]' 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:14.278 { 00:19:14.278 "nbd_device": "/dev/nbd0", 00:19:14.278 "bdev_name": "raid5f" 00:19:14.278 } 00:19:14.278 ]' 00:19:14.278 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.559 04:08:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.559 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:14.818 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:14.818 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:14.818 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:14.818 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:14.818 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:14.818 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:14.819 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:15.079 /dev/nbd0 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:15.079 04:08:11 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:15.079 1+0 records in 00:19:15.079 1+0 records out 00:19:15.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436223 s, 9.4 MB/s 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.079 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:15.339 { 00:19:15.339 "nbd_device": "/dev/nbd0", 00:19:15.339 "bdev_name": "raid5f" 00:19:15.339 } 00:19:15.339 ]' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:15.339 { 00:19:15.339 "nbd_device": "/dev/nbd0", 00:19:15.339 "bdev_name": "raid5f" 00:19:15.339 } 00:19:15.339 ]' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:15.339 256+0 records in 00:19:15.339 256+0 records out 00:19:15.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587846 s, 178 MB/s 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:15.339 256+0 records in 00:19:15.339 256+0 records out 00:19:15.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269574 s, 38.9 MB/s 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:15.339 04:08:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.599 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:15.860 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:16.120 malloc_lvol_verify 00:19:16.120 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:16.381 8e6a3f70-c489-4f6f-ac5f-5ab4704648dd 00:19:16.381 04:08:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:16.381 03d6c1bd-3d7a-4757-9313-36399e7ec9cc 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:16.642 /dev/nbd0 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:16.642 mke2fs 1.47.0 (5-Feb-2023) 00:19:16.642 Discarding device blocks: 0/4096 done 00:19:16.642 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:16.642 00:19:16.642 Allocating group tables: 0/1 done 00:19:16.642 Writing inode tables: 0/1 done 00:19:16.642 Creating journal (1024 blocks): done 00:19:16.642 Writing superblocks and filesystem accounting information: 0/1 done 00:19:16.642 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.642 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:16.903 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:16.903 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90065 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90065 ']' 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90065 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90065 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.904 killing process with pid 90065 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90065' 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90065 00:19:16.904 04:08:13 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90065 00:19:18.287 04:08:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:18.287 00:19:18.287 real 0m5.359s 00:19:18.287 user 0m7.288s 00:19:18.287 sys 0m1.237s 00:19:18.287 04:08:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.287 04:08:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:18.287 ************************************ 00:19:18.287 END TEST bdev_nbd 00:19:18.287 ************************************ 00:19:18.547 04:08:14 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:18.547 04:08:14 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:18.548 04:08:14 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:18.548 04:08:14 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:18.548 04:08:14 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.548 04:08:14 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.548 04:08:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:18.548 ************************************ 00:19:18.548 START TEST bdev_fio 00:19:18.548 ************************************ 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:18.548 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:18.548 04:08:14 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:18.548 ************************************ 00:19:18.548 START TEST bdev_fio_rw_verify 00:19:18.548 ************************************ 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:18.548 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:18.807 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:18.807 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:18.807 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:18.807 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:18.807 04:08:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.807 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.807 fio-3.35 00:19:18.807 Starting 1 thread 00:19:31.028 00:19:31.028 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90265: Mon Nov 18 04:08:26 2024 00:19:31.028 read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(492MiB/10001msec) 00:19:31.028 slat (usec): min=16, max=258, avg=18.65, stdev= 3.19 00:19:31.029 clat (usec): min=10, max=956, avg=126.75, stdev=46.88 00:19:31.029 lat (usec): min=29, max=1214, avg=145.40, stdev=47.84 00:19:31.029 clat percentiles (usec): 00:19:31.029 | 50.000th=[ 130], 99.000th=[ 208], 99.900th=[ 396], 99.990th=[ 865], 00:19:31.029 | 99.999th=[ 914] 00:19:31.029 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(509MiB/9879msec); 0 zone resets 00:19:31.029 slat (usec): min=7, max=293, avg=15.99, stdev= 4.00 00:19:31.029 clat (usec): min=57, max=2545, avg=292.65, stdev=43.04 00:19:31.029 lat (usec): min=72, max=2561, avg=308.65, stdev=43.98 00:19:31.029 clat percentiles (usec): 00:19:31.029 | 50.000th=[ 297], 99.000th=[ 367], 99.900th=[ 611], 99.990th=[ 1020], 00:19:31.029 | 99.999th=[ 2540] 00:19:31.029 bw ( KiB/s): min=46488, max=54992, per=98.93%, avg=52239.58, stdev=2181.80, samples=19 00:19:31.029 iops : min=11622, max=13748, avg=13059.89, stdev=545.45, samples=19 00:19:31.029 lat (usec) : 20=0.01%, 50=0.01%, 100=16.94%, 
250=39.08%, 500=43.84% 00:19:31.029 lat (usec) : 750=0.10%, 1000=0.02% 00:19:31.029 lat (msec) : 2=0.01%, 4=0.01% 00:19:31.029 cpu : usr=98.78%, sys=0.49%, ctx=32, majf=0, minf=10293 00:19:31.029 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.029 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.029 issued rwts: total=126058,130414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.029 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:31.029 00:19:31.029 Run status group 0 (all jobs): 00:19:31.029 READ: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=492MiB (516MB), run=10001-10001msec 00:19:31.029 WRITE: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=509MiB (534MB), run=9879-9879msec 00:19:31.289 ----------------------------------------------------- 00:19:31.289 Suppressions used: 00:19:31.289 count bytes template 00:19:31.289 1 7 /usr/src/fio/parse.c 00:19:31.289 436 41856 /usr/src/fio/iolog.c 00:19:31.289 1 8 libtcmalloc_minimal.so 00:19:31.289 1 904 libcrypto.so 00:19:31.289 ----------------------------------------------------- 00:19:31.289 00:19:31.289 00:19:31.289 real 0m12.689s 00:19:31.289 user 0m13.009s 00:19:31.289 sys 0m0.655s 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:31.289 ************************************ 00:19:31.289 END TEST bdev_fio_rw_verify 00:19:31.289 ************************************ 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:31.289 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "dd4ecc20-07c8-45aa-9033-254f3061c6fc"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"dd4ecc20-07c8-45aa-9033-254f3061c6fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "dd4ecc20-07c8-45aa-9033-254f3061c6fc",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "eb926eb9-8ce0-4fed-909b-150b81c50b35",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "c7a82f81-c2b1-4f5c-8c96-3a911d4f24ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "91157486-b9b4-442a-9b09-257589ac8cf8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.550 /home/vagrant/spdk_repo/spdk 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:31.550 00:19:31.550 real 0m13.001s 00:19:31.550 user 0m13.152s 00:19:31.550 sys 0m0.790s 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.550 04:08:27 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:31.550 ************************************ 00:19:31.550 END TEST bdev_fio 00:19:31.550 ************************************ 00:19:31.550 04:08:28 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:31.550 04:08:28 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:31.550 04:08:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:31.550 04:08:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.550 04:08:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:31.550 ************************************ 00:19:31.550 START TEST bdev_verify 00:19:31.550 ************************************ 00:19:31.550 04:08:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:31.550 [2024-11-18 04:08:28.139994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:31.550 [2024-11-18 04:08:28.140119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90432 ] 00:19:31.810 [2024-11-18 04:08:28.315503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:31.810 [2024-11-18 04:08:28.429033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.810 [2024-11-18 04:08:28.429067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.380 Running I/O for 5 seconds... 00:19:34.701 10723.00 IOPS, 41.89 MiB/s [2024-11-18T04:08:32.282Z] 10789.50 IOPS, 42.15 MiB/s [2024-11-18T04:08:33.221Z] 10844.00 IOPS, 42.36 MiB/s [2024-11-18T04:08:34.162Z] 10845.00 IOPS, 42.36 MiB/s [2024-11-18T04:08:34.162Z] 10845.60 IOPS, 42.37 MiB/s 00:19:37.521 Latency(us) 00:19:37.521 [2024-11-18T04:08:34.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.521 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.521 Verification LBA range: start 0x0 length 0x2000 00:19:37.521 raid5f : 5.02 4388.39 17.14 0.00 0.00 43935.10 147.56 30678.86 00:19:37.521 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.521 Verification LBA range: start 0x2000 length 0x2000 00:19:37.521 raid5f : 5.02 6444.99 25.18 0.00 0.00 29888.63 245.04 21749.94 00:19:37.521 [2024-11-18T04:08:34.162Z] =================================================================================================================== 00:19:37.521 [2024-11-18T04:08:34.162Z] Total : 10833.38 42.32 0.00 0.00 35582.55 147.56 30678.86 00:19:38.903 00:19:38.903 real 0m7.212s 00:19:38.903 user 0m13.349s 00:19:38.903 sys 0m0.271s 00:19:38.904 04:08:35 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.904 04:08:35 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:38.904 ************************************ 00:19:38.904 END TEST bdev_verify 00:19:38.904 ************************************ 00:19:38.904 04:08:35 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:38.904 04:08:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:38.904 04:08:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.904 04:08:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.904 ************************************ 00:19:38.904 START TEST bdev_verify_big_io 00:19:38.904 ************************************ 00:19:38.904 04:08:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:38.904 [2024-11-18 04:08:35.423290] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:38.904 [2024-11-18 04:08:35.423414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90527 ] 00:19:39.164 [2024-11-18 04:08:35.597568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:39.164 [2024-11-18 04:08:35.701637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.164 [2024-11-18 04:08:35.701672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.735 Running I/O for 5 seconds... 
00:19:42.056 633.00 IOPS, 39.56 MiB/s [2024-11-18T04:08:39.637Z] 761.00 IOPS, 47.56 MiB/s [2024-11-18T04:08:40.575Z] 782.00 IOPS, 48.88 MiB/s [2024-11-18T04:08:41.568Z] 793.25 IOPS, 49.58 MiB/s [2024-11-18T04:08:41.568Z] 812.20 IOPS, 50.76 MiB/s 00:19:44.927 Latency(us) 00:19:44.927 [2024-11-18T04:08:41.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.927 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:44.927 Verification LBA range: start 0x0 length 0x200 00:19:44.927 raid5f : 5.30 359.23 22.45 0.00 0.00 8852975.67 223.58 384630.50 00:19:44.927 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:44.927 Verification LBA range: start 0x200 length 0x200 00:19:44.927 raid5f : 5.13 445.70 27.86 0.00 0.00 7191040.85 127.89 309535.97 00:19:44.927 [2024-11-18T04:08:41.568Z] =================================================================================================================== 00:19:44.927 [2024-11-18T04:08:41.568Z] Total : 804.93 50.31 0.00 0.00 7946032.96 127.89 384630.50 00:19:46.324 00:19:46.324 real 0m7.496s 00:19:46.324 user 0m13.931s 00:19:46.324 sys 0m0.267s 00:19:46.324 04:08:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.324 04:08:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:46.324 ************************************ 00:19:46.324 END TEST bdev_verify_big_io 00:19:46.324 ************************************ 00:19:46.324 04:08:42 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.324 04:08:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:46.324 04:08:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.324 04:08:42 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.324 ************************************ 00:19:46.324 START TEST bdev_write_zeroes 00:19:46.324 ************************************ 00:19:46.324 04:08:42 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.585 [2024-11-18 04:08:42.994501] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:46.585 [2024-11-18 04:08:42.994613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90620 ] 00:19:46.585 [2024-11-18 04:08:43.168421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.844 [2024-11-18 04:08:43.266092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.415 Running I/O for 1 seconds... 
00:19:48.355 30519.00 IOPS, 119.21 MiB/s 00:19:48.355 Latency(us) 00:19:48.355 [2024-11-18T04:08:44.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.355 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.355 raid5f : 1.01 30485.15 119.08 0.00 0.00 4185.01 1323.60 5695.05 00:19:48.355 [2024-11-18T04:08:44.996Z] =================================================================================================================== 00:19:48.355 [2024-11-18T04:08:44.996Z] Total : 30485.15 119.08 0.00 0.00 4185.01 1323.60 5695.05 00:19:49.738 00:19:49.738 real 0m3.154s 00:19:49.738 user 0m2.771s 00:19:49.738 sys 0m0.257s 00:19:49.738 04:08:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.738 04:08:46 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 ************************************ 00:19:49.738 END TEST bdev_write_zeroes 00:19:49.738 ************************************ 00:19:49.738 04:08:46 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.738 04:08:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:49.738 04:08:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.738 04:08:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:49.738 ************************************ 00:19:49.738 START TEST bdev_json_nonenclosed 00:19:49.738 ************************************ 00:19:49.738 04:08:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.738 [2024-11-18 
04:08:46.234237] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:49.738 [2024-11-18 04:08:46.234352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90673 ] 00:19:49.999 [2024-11-18 04:08:46.412398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.999 [2024-11-18 04:08:46.523375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.999 [2024-11-18 04:08:46.523466] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:49.999 [2024-11-18 04:08:46.523509] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:49.999 [2024-11-18 04:08:46.523519] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:50.259 00:19:50.259 real 0m0.623s 00:19:50.259 user 0m0.383s 00:19:50.259 sys 0m0.136s 00:19:50.259 04:08:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.259 04:08:46 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:50.259 ************************************ 00:19:50.259 END TEST bdev_json_nonenclosed 00:19:50.259 ************************************ 00:19:50.259 04:08:46 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.259 04:08:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:50.259 04:08:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.259 04:08:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:50.259 
************************************ 00:19:50.259 START TEST bdev_json_nonarray 00:19:50.259 ************************************ 00:19:50.259 04:08:46 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.519 [2024-11-18 04:08:46.934518] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:50.519 [2024-11-18 04:08:46.934642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90704 ] 00:19:50.519 [2024-11-18 04:08:47.112809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.779 [2024-11-18 04:08:47.218242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.779 [2024-11-18 04:08:47.218348] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:50.779 [2024-11-18 04:08:47.218366] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:50.779 [2024-11-18 04:08:47.218384] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:51.040 00:19:51.040 real 0m0.618s 00:19:51.040 user 0m0.373s 00:19:51.040 sys 0m0.141s 00:19:51.040 04:08:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.040 04:08:47 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 ************************************ 00:19:51.040 END TEST bdev_json_nonarray 00:19:51.040 ************************************ 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:51.040 04:08:47 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:51.040 00:19:51.040 real 0m47.246s 00:19:51.040 user 1m3.854s 00:19:51.040 sys 0m4.873s 00:19:51.040 04:08:47 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.040 04:08:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 
************************************ 00:19:51.040 END TEST blockdev_raid5f 00:19:51.040 ************************************ 00:19:51.040 04:08:47 -- spdk/autotest.sh@194 -- # uname -s 00:19:51.040 04:08:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:51.040 04:08:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.040 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 04:08:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:51.040 04:08:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:51.040 04:08:47 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:51.040 04:08:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:51.040 04:08:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.040 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:19:51.040 04:08:47 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:51.040 04:08:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:51.040 04:08:47 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:51.040 04:08:47 -- common/autotest_common.sh@10 -- # set +x 00:19:53.583 INFO: APP EXITING 00:19:53.583 INFO: killing all VMs 00:19:53.583 INFO: killing vhost app 00:19:53.583 INFO: EXIT DONE 00:19:54.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:54.154 Waiting for block devices as requested 00:19:54.154 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:54.154 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:55.095 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.355 Cleaning 00:19:55.355 Removing: /var/run/dpdk/spdk0/config 00:19:55.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:55.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:55.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:55.355 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:55.355 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:55.355 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:55.355 Removing: /dev/shm/spdk_tgt_trace.pid56879 00:19:55.355 Removing: /var/run/dpdk/spdk0 00:19:55.355 Removing: /var/run/dpdk/spdk_pid56633 00:19:55.355 Removing: /var/run/dpdk/spdk_pid56879 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57114 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57218 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57274 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57402 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57426 
00:19:55.355 Removing: /var/run/dpdk/spdk_pid57630 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57746 00:19:55.355 Removing: /var/run/dpdk/spdk_pid57849 00:19:55.356 Removing: /var/run/dpdk/spdk_pid57975 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58084 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58124 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58166 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58236 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58359 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58801 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58881 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58961 00:19:55.356 Removing: /var/run/dpdk/spdk_pid58977 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59128 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59156 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59307 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59329 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59403 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59422 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59493 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59514 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59709 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59751 00:19:55.356 Removing: /var/run/dpdk/spdk_pid59839 00:19:55.356 Removing: /var/run/dpdk/spdk_pid61192 00:19:55.356 Removing: /var/run/dpdk/spdk_pid61403 00:19:55.356 Removing: /var/run/dpdk/spdk_pid61549 00:19:55.356 Removing: /var/run/dpdk/spdk_pid62189 00:19:55.356 Removing: /var/run/dpdk/spdk_pid62398 00:19:55.356 Removing: /var/run/dpdk/spdk_pid62545 00:19:55.356 Removing: /var/run/dpdk/spdk_pid63188 00:19:55.356 Removing: /var/run/dpdk/spdk_pid63517 00:19:55.356 Removing: /var/run/dpdk/spdk_pid63664 00:19:55.356 Removing: /var/run/dpdk/spdk_pid65049 00:19:55.356 Removing: /var/run/dpdk/spdk_pid65302 00:19:55.356 Removing: /var/run/dpdk/spdk_pid65443 00:19:55.356 Removing: /var/run/dpdk/spdk_pid66827 00:19:55.616 Removing: /var/run/dpdk/spdk_pid67090 00:19:55.616 Removing: /var/run/dpdk/spdk_pid67230 
00:19:55.616 Removing: /var/run/dpdk/spdk_pid68625 00:19:55.616 Removing: /var/run/dpdk/spdk_pid69072 00:19:55.616 Removing: /var/run/dpdk/spdk_pid69217 00:19:55.616 Removing: /var/run/dpdk/spdk_pid70712 00:19:55.616 Removing: /var/run/dpdk/spdk_pid70971 00:19:55.616 Removing: /var/run/dpdk/spdk_pid71117 00:19:55.616 Removing: /var/run/dpdk/spdk_pid72615 00:19:55.616 Removing: /var/run/dpdk/spdk_pid72874 00:19:55.616 Removing: /var/run/dpdk/spdk_pid73025 00:19:55.616 Removing: /var/run/dpdk/spdk_pid74505 00:19:55.616 Removing: /var/run/dpdk/spdk_pid74998 00:19:55.616 Removing: /var/run/dpdk/spdk_pid75142 00:19:55.616 Removing: /var/run/dpdk/spdk_pid75287 00:19:55.616 Removing: /var/run/dpdk/spdk_pid75716 00:19:55.616 Removing: /var/run/dpdk/spdk_pid76435 00:19:55.616 Removing: /var/run/dpdk/spdk_pid76811 00:19:55.616 Removing: /var/run/dpdk/spdk_pid77495 00:19:55.616 Removing: /var/run/dpdk/spdk_pid77930 00:19:55.616 Removing: /var/run/dpdk/spdk_pid78678 00:19:55.616 Removing: /var/run/dpdk/spdk_pid79076 00:19:55.616 Removing: /var/run/dpdk/spdk_pid81034 00:19:55.616 Removing: /var/run/dpdk/spdk_pid81478 00:19:55.616 Removing: /var/run/dpdk/spdk_pid81917 00:19:55.616 Removing: /var/run/dpdk/spdk_pid83999 00:19:55.616 Removing: /var/run/dpdk/spdk_pid84488 00:19:55.616 Removing: /var/run/dpdk/spdk_pid85008 00:19:55.616 Removing: /var/run/dpdk/spdk_pid86071 00:19:55.616 Removing: /var/run/dpdk/spdk_pid86393 00:19:55.616 Removing: /var/run/dpdk/spdk_pid87336 00:19:55.616 Removing: /var/run/dpdk/spdk_pid87659 00:19:55.616 Removing: /var/run/dpdk/spdk_pid88602 00:19:55.616 Removing: /var/run/dpdk/spdk_pid88925 00:19:55.616 Removing: /var/run/dpdk/spdk_pid89611 00:19:55.616 Removing: /var/run/dpdk/spdk_pid89896 00:19:55.616 Removing: /var/run/dpdk/spdk_pid89959 00:19:55.616 Removing: /var/run/dpdk/spdk_pid90005 00:19:55.616 Removing: /var/run/dpdk/spdk_pid90250 00:19:55.616 Removing: /var/run/dpdk/spdk_pid90432 00:19:55.616 Removing: /var/run/dpdk/spdk_pid90527 
00:19:55.616 Removing: /var/run/dpdk/spdk_pid90620 00:19:55.616 Removing: /var/run/dpdk/spdk_pid90673 00:19:55.616 Removing: /var/run/dpdk/spdk_pid90704 00:19:55.616 Clean 00:19:55.876 04:08:52 -- common/autotest_common.sh@1453 -- # return 0 00:19:55.876 04:08:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:55.876 04:08:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.876 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:19:55.876 04:08:52 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:55.876 04:08:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.876 04:08:52 -- common/autotest_common.sh@10 -- # set +x 00:19:55.876 04:08:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:55.876 04:08:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:55.876 04:08:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:55.876 04:08:52 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:55.876 04:08:52 -- spdk/autotest.sh@398 -- # hostname 00:19:55.876 04:08:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:56.136 geninfo: WARNING: invalid characters removed from testname! 
00:20:18.108 04:09:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.650 04:09:16 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.559 04:09:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:24.468 04:09:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:26.429 04:09:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.971 04:09:25 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:30.883 04:09:27 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:30.883 04:09:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:30.883 04:09:27 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:30.883 04:09:27 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:30.883 04:09:27 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:30.883 04:09:27 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:30.883 + [[ -n 5433 ]] 00:20:30.883 + sudo kill 5433 00:20:30.894 [Pipeline] } 00:20:30.910 [Pipeline] // timeout 00:20:30.915 [Pipeline] } 00:20:30.929 [Pipeline] // stage 00:20:30.935 [Pipeline] } 00:20:30.951 [Pipeline] // catchError 00:20:30.960 [Pipeline] stage 00:20:30.962 [Pipeline] { (Stop VM) 00:20:30.973 [Pipeline] sh 00:20:31.256 + vagrant halt 00:20:33.796 ==> default: Halting domain... 00:20:41.944 [Pipeline] sh 00:20:42.227 + vagrant destroy -f 00:20:44.770 ==> default: Removing domain... 
00:20:44.784 [Pipeline] sh 00:20:45.069 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:45.080 [Pipeline] } 00:20:45.098 [Pipeline] // stage 00:20:45.104 [Pipeline] } 00:20:45.120 [Pipeline] // dir 00:20:45.125 [Pipeline] } 00:20:45.141 [Pipeline] // wrap 00:20:45.149 [Pipeline] } 00:20:45.163 [Pipeline] // catchError 00:20:45.174 [Pipeline] stage 00:20:45.176 [Pipeline] { (Epilogue) 00:20:45.190 [Pipeline] sh 00:20:45.475 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:49.690 [Pipeline] catchError 00:20:49.692 [Pipeline] { 00:20:49.706 [Pipeline] sh 00:20:49.995 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:49.995 Artifacts sizes are good 00:20:50.057 [Pipeline] } 00:20:50.071 [Pipeline] // catchError 00:20:50.082 [Pipeline] archiveArtifacts 00:20:50.090 Archiving artifacts 00:20:50.190 [Pipeline] cleanWs 00:20:50.202 [WS-CLEANUP] Deleting project workspace... 00:20:50.202 [WS-CLEANUP] Deferred wipeout is used... 00:20:50.209 [WS-CLEANUP] done 00:20:50.211 [Pipeline] } 00:20:50.227 [Pipeline] // stage 00:20:50.232 [Pipeline] } 00:20:50.247 [Pipeline] // node 00:20:50.252 [Pipeline] End of Pipeline 00:20:50.295 Finished: SUCCESS